00:00:00.000 Started by upstream project "spdk-dpdk-per-patch" build number 294 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.040 The recommended git tool is: git 00:00:00.040 using credential 00000000-0000-0000-0000-000000000002 00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.075 Fetching changes from the remote Git repository 00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.126 Using shallow fetch with depth 1 00:00:00.126 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.126 > git --version # timeout=10 00:00:00.186 > git --version # 'git version 2.39.2' 00:00:00.186 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.739 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.751 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.762 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:05.762 > git config core.sparsecheckout # timeout=10 00:00:05.772 > git read-tree -mu HEAD # timeout=10 00:00:05.787 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:05.807 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:05.807 > git rev-list 
--no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:05.939 [Pipeline] Start of Pipeline 00:00:05.953 [Pipeline] library 00:00:05.955 Loading library shm_lib@master 00:00:05.955 Library shm_lib@master is cached. Copying from home. 00:00:05.968 [Pipeline] node 00:00:05.983 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:05.985 [Pipeline] { 00:00:05.995 [Pipeline] catchError 00:00:05.997 [Pipeline] { 00:00:06.008 [Pipeline] wrap 00:00:06.016 [Pipeline] { 00:00:06.023 [Pipeline] stage 00:00:06.025 [Pipeline] { (Prologue) 00:00:06.041 [Pipeline] echo 00:00:06.042 Node: VM-host-WFP7 00:00:06.049 [Pipeline] cleanWs 00:00:06.059 [WS-CLEANUP] Deleting project workspace... 00:00:06.059 [WS-CLEANUP] Deferred wipeout is used... 00:00:06.066 [WS-CLEANUP] done 00:00:06.266 [Pipeline] setCustomBuildProperty 00:00:06.340 [Pipeline] httpRequest 00:00:06.755 [Pipeline] echo 00:00:06.757 Sorcerer 10.211.164.101 is alive 00:00:06.765 [Pipeline] retry 00:00:06.766 [Pipeline] { 00:00:06.775 [Pipeline] httpRequest 00:00:06.779 HttpMethod: GET 00:00:06.780 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.781 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.791 Response Code: HTTP/1.1 200 OK 00:00:06.791 Success: Status code 200 is in the accepted range: 200,404 00:00:06.792 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.009 [Pipeline] } 00:00:11.026 [Pipeline] // retry 00:00:11.034 [Pipeline] sh 00:00:11.318 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.334 [Pipeline] httpRequest 00:00:11.692 [Pipeline] echo 00:00:11.694 Sorcerer 10.211.164.101 is alive 00:00:11.702 [Pipeline] retry 00:00:11.704 [Pipeline] { 00:00:11.717 [Pipeline] httpRequest 00:00:11.722 HttpMethod: GET 00:00:11.722 URL: 
http://10.211.164.101/packages/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:00:11.723 Sending request to url: http://10.211.164.101/packages/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:00:11.745 Response Code: HTTP/1.1 200 OK 00:00:11.746 Success: Status code 200 is in the accepted range: 200,404 00:00:11.746 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:01:21.808 [Pipeline] } 00:01:21.823 [Pipeline] // retry 00:01:21.829 [Pipeline] sh 00:01:22.107 + tar --no-same-owner -xf spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:01:24.652 [Pipeline] sh 00:01:24.962 + git -C spdk log --oneline -n5 00:01:24.962 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:01:24.962 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:01:24.962 0ce363beb spdk_log: introduce spdk_log_ext API 00:01:24.962 412fced1b bdev/compress: unmap support. 00:01:24.962 3791dfc65 nvme: rename spdk_nvme_ctrlr_aer_completion_list 00:01:24.976 [Pipeline] sh 00:01:25.261 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/86/24686/3 00:01:26.638 From https://review.spdk.io/gerrit/spdk/dpdk 00:01:26.638 * branch refs/changes/86/24686/3 -> FETCH_HEAD 00:01:26.650 [Pipeline] sh 00:01:26.931 + git -C spdk/dpdk checkout FETCH_HEAD 00:01:27.501 Previous HEAD position was 8d8db71763 eal/alarm_cancel: Fix thread starvation 00:01:27.501 HEAD is now at ad6cb6153f bus/pci: don't open uio device in secondary process 00:01:27.518 [Pipeline] writeFile 00:01:27.532 [Pipeline] sh 00:01:27.816 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:27.828 [Pipeline] sh 00:01:28.163 + cat autorun-spdk.conf 00:01:28.163 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.163 SPDK_RUN_ASAN=1 00:01:28.163 SPDK_RUN_UBSAN=1 00:01:28.163 SPDK_TEST_RAID=1 00:01:28.163 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.170 RUN_NIGHTLY= 00:01:28.172 [Pipeline] } 00:01:28.185 [Pipeline] // stage 
00:01:28.200 [Pipeline] stage 00:01:28.203 [Pipeline] { (Run VM) 00:01:28.213 [Pipeline] sh 00:01:28.494 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:28.494 + echo 'Start stage prepare_nvme.sh' 00:01:28.494 Start stage prepare_nvme.sh 00:01:28.494 + [[ -n 3 ]] 00:01:28.494 + disk_prefix=ex3 00:01:28.494 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:28.494 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:28.494 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:28.494 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.494 ++ SPDK_RUN_ASAN=1 00:01:28.494 ++ SPDK_RUN_UBSAN=1 00:01:28.494 ++ SPDK_TEST_RAID=1 00:01:28.494 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.494 ++ RUN_NIGHTLY= 00:01:28.494 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:28.494 + nvme_files=() 00:01:28.494 + declare -A nvme_files 00:01:28.494 + backend_dir=/var/lib/libvirt/images/backends 00:01:28.494 + nvme_files['nvme.img']=5G 00:01:28.494 + nvme_files['nvme-cmb.img']=5G 00:01:28.494 + nvme_files['nvme-multi0.img']=4G 00:01:28.494 + nvme_files['nvme-multi1.img']=4G 00:01:28.495 + nvme_files['nvme-multi2.img']=4G 00:01:28.495 + nvme_files['nvme-openstack.img']=8G 00:01:28.495 + nvme_files['nvme-zns.img']=5G 00:01:28.495 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:28.495 + (( SPDK_TEST_FTL == 1 )) 00:01:28.495 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:28.495 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:28.495 + for nvme in "${!nvme_files[@]}" 00:01:28.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:28.495 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.495 + for nvme in "${!nvme_files[@]}" 00:01:28.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:28.495 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.495 + for nvme in "${!nvme_files[@]}" 00:01:28.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:28.495 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:28.495 + for nvme in "${!nvme_files[@]}" 00:01:28.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:28.495 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.495 + for nvme in "${!nvme_files[@]}" 00:01:28.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:28.495 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.495 + for nvme in "${!nvme_files[@]}" 00:01:28.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:28.495 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.495 + for nvme in "${!nvme_files[@]}" 00:01:28.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:28.495 
Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.754 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:28.754 + echo 'End stage prepare_nvme.sh' 00:01:28.754 End stage prepare_nvme.sh 00:01:28.766 [Pipeline] sh 00:01:29.049 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:29.049 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:29.049 00:01:29.049 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:29.049 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:29.049 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:29.049 HELP=0 00:01:29.049 DRY_RUN=0 00:01:29.049 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:29.049 NVME_DISKS_TYPE=nvme,nvme, 00:01:29.049 NVME_AUTO_CREATE=0 00:01:29.049 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:29.049 NVME_CMB=,, 00:01:29.049 NVME_PMR=,, 00:01:29.049 NVME_ZNS=,, 00:01:29.049 NVME_MS=,, 00:01:29.049 NVME_FDP=,, 00:01:29.049 SPDK_VAGRANT_DISTRO=fedora39 00:01:29.049 SPDK_VAGRANT_VMCPU=10 00:01:29.049 SPDK_VAGRANT_VMRAM=12288 00:01:29.049 SPDK_VAGRANT_PROVIDER=libvirt 00:01:29.049 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:29.049 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:29.049 SPDK_OPENSTACK_NETWORK=0 00:01:29.049 VAGRANT_PACKAGE_BOX=0 00:01:29.049 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:29.049 
FORCE_DISTRO=true 00:01:29.049 VAGRANT_BOX_VERSION= 00:01:29.049 EXTRA_VAGRANTFILES= 00:01:29.049 NIC_MODEL=virtio 00:01:29.049 00:01:29.049 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:29.049 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:30.956 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.524 ==> default: Creating image (snapshot of base box volume). 00:01:31.524 ==> default: Creating domain with the following settings... 00:01:31.524 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728639315_2fd10a0cbdcfb2105764 00:01:31.524 ==> default: -- Domain type: kvm 00:01:31.524 ==> default: -- Cpus: 10 00:01:31.524 ==> default: -- Feature: acpi 00:01:31.524 ==> default: -- Feature: apic 00:01:31.524 ==> default: -- Feature: pae 00:01:31.524 ==> default: -- Memory: 12288M 00:01:31.524 ==> default: -- Memory Backing: hugepages: 00:01:31.524 ==> default: -- Management MAC: 00:01:31.524 ==> default: -- Loader: 00:01:31.524 ==> default: -- Nvram: 00:01:31.524 ==> default: -- Base box: spdk/fedora39 00:01:31.524 ==> default: -- Storage pool: default 00:01:31.524 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728639315_2fd10a0cbdcfb2105764.img (20G) 00:01:31.524 ==> default: -- Volume Cache: default 00:01:31.524 ==> default: -- Kernel: 00:01:31.524 ==> default: -- Initrd: 00:01:31.524 ==> default: -- Graphics Type: vnc 00:01:31.524 ==> default: -- Graphics Port: -1 00:01:31.524 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.524 ==> default: -- Graphics Password: Not defined 00:01:31.524 ==> default: -- Video Type: cirrus 00:01:31.524 ==> default: -- Video VRAM: 9216 00:01:31.524 ==> default: -- Sound Type: 00:01:31.524 ==> default: -- Keymap: en-us 00:01:31.524 ==> default: -- TPM Path: 00:01:31.524 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:31.524 ==> default: -- Command line args: 00:01:31.524 
==> default: -> value=-device, 00:01:31.524 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:31.524 ==> default: -> value=-drive, 00:01:31.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:31.524 ==> default: -> value=-device, 00:01:31.524 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.524 ==> default: -> value=-device, 00:01:31.524 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:31.524 ==> default: -> value=-drive, 00:01:31.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:31.524 ==> default: -> value=-device, 00:01:31.524 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.524 ==> default: -> value=-drive, 00:01:31.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:31.524 ==> default: -> value=-device, 00:01:31.524 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.524 ==> default: -> value=-drive, 00:01:31.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:31.524 ==> default: -> value=-device, 00:01:31.524 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.524 ==> default: Creating shared folders metadata... 00:01:31.783 ==> default: Starting domain. 00:01:32.717 ==> default: Waiting for domain to get an IP address... 00:01:50.817 ==> default: Waiting for SSH to become available... 00:01:50.817 ==> default: Configuring and enabling network interfaces... 
00:01:57.411 default: SSH address: 192.168.121.78:22 00:01:57.411 default: SSH username: vagrant 00:01:57.411 default: SSH auth method: private key 00:02:00.708 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:08.834 ==> default: Mounting SSHFS shared folder... 00:02:11.373 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:11.373 ==> default: Checking Mount.. 00:02:13.279 ==> default: Folder Successfully Mounted! 00:02:13.279 ==> default: Running provisioner: file... 00:02:14.220 default: ~/.gitconfig => .gitconfig 00:02:14.811 00:02:14.811 SUCCESS! 00:02:14.811 00:02:14.811 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:14.811 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:14.811 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:14.811 00:02:14.820 [Pipeline] } 00:02:14.834 [Pipeline] // stage 00:02:14.843 [Pipeline] dir 00:02:14.844 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:14.845 [Pipeline] { 00:02:14.857 [Pipeline] catchError 00:02:14.858 [Pipeline] { 00:02:14.870 [Pipeline] sh 00:02:15.152 + vagrant ssh-config --host vagrant 00:02:15.152 + sed -ne /^Host/,$p 00:02:15.152 + tee ssh_conf 00:02:17.689 Host vagrant 00:02:17.689 HostName 192.168.121.78 00:02:17.689 User vagrant 00:02:17.689 Port 22 00:02:17.689 UserKnownHostsFile /dev/null 00:02:17.689 StrictHostKeyChecking no 00:02:17.689 PasswordAuthentication no 00:02:17.689 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:17.689 IdentitiesOnly yes 00:02:17.689 LogLevel FATAL 00:02:17.689 ForwardAgent yes 00:02:17.689 ForwardX11 yes 00:02:17.689 00:02:17.704 [Pipeline] withEnv 00:02:17.706 [Pipeline] { 00:02:17.720 [Pipeline] sh 00:02:18.012 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:18.012 source /etc/os-release 00:02:18.012 [[ -e /image.version ]] && img=$(< /image.version) 00:02:18.012 # Minimal, systemd-like check. 00:02:18.012 if [[ -e /.dockerenv ]]; then 00:02:18.012 # Clear garbage from the node's name: 00:02:18.012 # agt-er_autotest_547-896 -> autotest_547-896 00:02:18.012 # $HOSTNAME is the actual container id 00:02:18.012 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:18.012 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:18.012 # We can assume this is a mount from a host where container is running, 00:02:18.012 # so fetch its hostname to easily identify the target swarm worker. 
00:02:18.012 container="$(< /etc/hostname) ($agent)" 00:02:18.012 else 00:02:18.012 # Fallback 00:02:18.012 container=$agent 00:02:18.012 fi 00:02:18.012 fi 00:02:18.012 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:18.012 00:02:18.295 [Pipeline] } 00:02:18.311 [Pipeline] // withEnv 00:02:18.319 [Pipeline] setCustomBuildProperty 00:02:18.334 [Pipeline] stage 00:02:18.336 [Pipeline] { (Tests) 00:02:18.353 [Pipeline] sh 00:02:18.636 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:18.911 [Pipeline] sh 00:02:19.196 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:19.471 [Pipeline] timeout 00:02:19.471 Timeout set to expire in 1 hr 30 min 00:02:19.473 [Pipeline] { 00:02:19.487 [Pipeline] sh 00:02:19.770 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:20.341 HEAD is now at 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:02:20.354 [Pipeline] sh 00:02:20.640 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:20.915 [Pipeline] sh 00:02:21.201 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.477 [Pipeline] sh 00:02:21.763 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:22.023 ++ readlink -f spdk_repo 00:02:22.023 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.023 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.023 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.023 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.023 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.023 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.023 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.023 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:22.023 + cd /home/vagrant/spdk_repo 00:02:22.023 + source /etc/os-release 00:02:22.023 ++ NAME='Fedora Linux' 00:02:22.023 ++ VERSION='39 (Cloud Edition)' 00:02:22.023 ++ ID=fedora 00:02:22.023 ++ VERSION_ID=39 00:02:22.023 ++ VERSION_CODENAME= 00:02:22.023 ++ PLATFORM_ID=platform:f39 00:02:22.023 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:22.023 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.023 ++ LOGO=fedora-logo-icon 00:02:22.023 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:22.023 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.023 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:22.023 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.023 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.023 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.023 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:22.023 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.023 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:22.023 ++ SUPPORT_END=2024-11-12 00:02:22.023 ++ VARIANT='Cloud Edition' 00:02:22.023 ++ VARIANT_ID=cloud 00:02:22.023 + uname -a 00:02:22.023 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:22.023 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:22.592 Hugepages 00:02:22.592 node hugesize free / total 00:02:22.592 node0 1048576kB 0 / 0 00:02:22.592 node0 2048kB 0 / 0 00:02:22.592 00:02:22.592 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.592 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:22.592 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:22.852 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:22.852 + rm -f /tmp/spdk-ld-path 00:02:22.852 + source autorun-spdk.conf 00:02:22.852 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.852 ++ SPDK_RUN_ASAN=1 00:02:22.852 ++ SPDK_RUN_UBSAN=1 00:02:22.852 ++ SPDK_TEST_RAID=1 00:02:22.852 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.852 ++ RUN_NIGHTLY= 00:02:22.852 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:22.852 + [[ -n '' ]] 00:02:22.852 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:22.852 + for M in /var/spdk/build-*-manifest.txt 00:02:22.852 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:22.852 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.852 + for M in /var/spdk/build-*-manifest.txt 00:02:22.852 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:22.852 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.852 + for M in /var/spdk/build-*-manifest.txt 00:02:22.852 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:22.852 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.852 ++ uname 00:02:22.852 + [[ Linux == \L\i\n\u\x ]] 00:02:22.852 + sudo dmesg -T 00:02:22.852 + sudo dmesg --clear 00:02:22.852 + dmesg_pid=5429 00:02:22.852 + [[ Fedora Linux == FreeBSD ]] 00:02:22.852 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.852 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.852 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:22.852 + sudo dmesg -Tw 00:02:22.852 + [[ -x /usr/src/fio-static/fio ]] 00:02:22.852 + export FIO_BIN=/usr/src/fio-static/fio 00:02:22.852 + FIO_BIN=/usr/src/fio-static/fio 00:02:22.852 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:22.852 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:22.852 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:22.852 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.852 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.852 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:22.852 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.852 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.852 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:22.852 Test configuration: 00:02:22.852 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.852 SPDK_RUN_ASAN=1 00:02:22.852 SPDK_RUN_UBSAN=1 00:02:22.852 SPDK_TEST_RAID=1 00:02:22.852 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.112 RUN_NIGHTLY= 09:36:07 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:23.112 09:36:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:23.112 09:36:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:23.112 09:36:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:23.112 09:36:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.112 09:36:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.112 09:36:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.112 09:36:07 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.112 09:36:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.112 09:36:07 -- paths/export.sh@5 -- $ export PATH 00:02:23.112 09:36:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.112 09:36:07 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:23.112 09:36:07 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:23.112 09:36:07 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728639367.XXXXXX 00:02:23.112 09:36:07 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728639367.W6nrHq 00:02:23.112 09:36:07 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:23.112 09:36:07 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:23.112 09:36:07 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:23.112 09:36:07 -- common/autobuild_common.sh@499 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:23.112 09:36:07 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:23.112 09:36:07 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:23.112 09:36:07 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:23.112 09:36:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.112 09:36:07 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:02:23.112 09:36:07 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:23.112 09:36:07 -- pm/common@17 -- $ local monitor 00:02:23.112 09:36:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.112 09:36:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.112 09:36:07 -- pm/common@25 -- $ sleep 1 00:02:23.112 09:36:07 -- pm/common@21 -- $ date +%s 00:02:23.112 09:36:07 -- pm/common@21 -- $ date +%s 00:02:23.112 09:36:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728639367 00:02:23.112 09:36:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728639367 00:02:23.112 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728639367_collect-cpu-load.pm.log 00:02:23.112 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728639367_collect-vmstat.pm.log 00:02:24.060 09:36:08 -- common/autobuild_common.sh@505 -- 
$ trap stop_monitor_resources EXIT 00:02:24.060 09:36:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:24.060 09:36:08 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:24.060 09:36:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:24.060 09:36:08 -- spdk/autobuild.sh@16 -- $ date -u 00:02:24.060 Fri Oct 11 09:36:08 AM UTC 2024 00:02:24.060 09:36:08 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:24.060 v25.01-pre-54-g5031f0f3b 00:02:24.060 09:36:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:24.060 09:36:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:24.060 09:36:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.060 09:36:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.060 09:36:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.060 ************************************ 00:02:24.060 START TEST asan 00:02:24.060 ************************************ 00:02:24.060 using asan 00:02:24.060 09:36:08 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:24.060 00:02:24.060 real 0m0.000s 00:02:24.060 user 0m0.000s 00:02:24.060 sys 0m0.000s 00:02:24.060 09:36:08 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.060 09:36:08 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.060 ************************************ 00:02:24.060 END TEST asan 00:02:24.060 ************************************ 00:02:24.060 09:36:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:24.060 09:36:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:24.060 09:36:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.060 09:36:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.060 09:36:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.319 ************************************ 00:02:24.319 START TEST ubsan 00:02:24.319 ************************************ 00:02:24.319 using ubsan 00:02:24.319 09:36:08 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:24.319 00:02:24.319 real 0m0.001s 00:02:24.319 user 0m0.001s 00:02:24.319 sys 0m0.000s 00:02:24.319 09:36:08 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.319 09:36:08 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.319 ************************************ 00:02:24.319 END TEST ubsan 00:02:24.319 ************************************ 00:02:24.319 09:36:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:24.319 09:36:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:24.319 09:36:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:24.319 09:36:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:24.319 09:36:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:24.319 09:36:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:24.319 09:36:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:24.319 09:36:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:24.319 09:36:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:24.319 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:24.319 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.888 Using 'verbs' RDMA provider 00:02:43.948 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:58.859 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:58.859 Creating mk/config.mk...done. 00:02:58.859 Creating mk/cc.flags.mk...done. 00:02:58.859 Type 'make' to build. 
00:02:58.859 09:36:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:58.859 09:36:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:58.859 09:36:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:58.859 09:36:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.859 ************************************ 00:02:58.859 START TEST make 00:02:58.859 ************************************ 00:02:58.859 09:36:42 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:58.859 make[1]: Nothing to be done for 'all'. 00:03:13.790 The Meson build system 00:03:13.790 Version: 1.5.0 00:03:13.790 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:13.790 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:13.790 Build type: native build 00:03:13.790 Program cat found: YES (/usr/bin/cat) 00:03:13.790 Project name: DPDK 00:03:13.790 Project version: 24.07.0 00:03:13.790 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:13.790 C linker for the host machine: cc ld.bfd 2.40-14 00:03:13.790 Host machine cpu family: x86_64 00:03:13.790 Host machine cpu: x86_64 00:03:13.790 Message: ## Building in Developer Mode ## 00:03:13.790 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:13.790 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:13.790 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:13.790 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:03:13.790 Program cat found: YES (/usr/bin/cat) 00:03:13.790 Compiler for C supports arguments -march=native: YES 00:03:13.790 Checking for size of "void *" : 8 00:03:13.790 Checking for size of "void *" : 8 (cached) 00:03:13.790 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:13.790 Library m found: YES 00:03:13.790 Library numa found: YES 00:03:13.790 Has 
header "numaif.h" : YES 00:03:13.790 Library fdt found: NO 00:03:13.790 Library execinfo found: NO 00:03:13.790 Has header "execinfo.h" : YES 00:03:13.790 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:13.790 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:13.790 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:13.790 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:13.790 Run-time dependency openssl found: YES 3.1.1 00:03:13.790 Run-time dependency libpcap found: YES 1.10.4 00:03:13.790 Has header "pcap.h" with dependency libpcap: YES 00:03:13.790 Compiler for C supports arguments -Wcast-qual: YES 00:03:13.790 Compiler for C supports arguments -Wdeprecated: YES 00:03:13.790 Compiler for C supports arguments -Wformat: YES 00:03:13.790 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:13.790 Compiler for C supports arguments -Wformat-security: NO 00:03:13.790 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:13.790 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:13.790 Compiler for C supports arguments -Wnested-externs: YES 00:03:13.790 Compiler for C supports arguments -Wold-style-definition: YES 00:03:13.790 Compiler for C supports arguments -Wpointer-arith: YES 00:03:13.790 Compiler for C supports arguments -Wsign-compare: YES 00:03:13.790 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:13.790 Compiler for C supports arguments -Wundef: YES 00:03:13.790 Compiler for C supports arguments -Wwrite-strings: YES 00:03:13.790 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:13.790 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:13.790 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:13.790 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:13.790 Program objdump found: YES (/usr/bin/objdump) 00:03:13.790 Compiler for C supports arguments -mavx512f: YES 
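The long run of `Compiler for C supports arguments -W...: YES/NO` records above are Meson compile probes: Meson tries to compile a trivial program with the candidate flag (treating warnings as errors) and reports YES if the compiler accepts it. A simplified shell approximation of that probe — not Meson's actual implementation:

```shell
# Approximate a Meson "Compiler for C supports arguments" probe:
# compile an empty program with the candidate flag plus -Werror;
# success means the flag is supported, failure (or no compiler) means not.
check_cflag() {
  if echo 'int main(void){return 0;}' \
      | cc -Werror "$1" -x c -o /dev/null - 2>/dev/null; then
    echo YES
  else
    echo NO
  fi
}

check_cflag -Wcast-qual
```

The same mechanism explains the `Fetching value of define "__AVX512F__"` records that follow: Meson compiles a snippet that expands the macro and reads the result back.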
00:03:13.790 Checking if "AVX512 checking" compiles: YES 00:03:13.790 Fetching value of define "__SSE4_2__" : 1 00:03:13.790 Fetching value of define "__AES__" : 1 00:03:13.790 Fetching value of define "__AVX__" : 1 00:03:13.790 Fetching value of define "__AVX2__" : 1 00:03:13.790 Fetching value of define "__AVX512BW__" : 1 00:03:13.790 Fetching value of define "__AVX512CD__" : 1 00:03:13.790 Fetching value of define "__AVX512DQ__" : 1 00:03:13.790 Fetching value of define "__AVX512F__" : 1 00:03:13.790 Fetching value of define "__AVX512VL__" : 1 00:03:13.790 Fetching value of define "__PCLMUL__" : 1 00:03:13.790 Fetching value of define "__RDRND__" : 1 00:03:13.790 Fetching value of define "__RDSEED__" : 1 00:03:13.790 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:13.790 Fetching value of define "__znver1__" : (undefined) 00:03:13.790 Fetching value of define "__znver2__" : (undefined) 00:03:13.790 Fetching value of define "__znver3__" : (undefined) 00:03:13.790 Fetching value of define "__znver4__" : (undefined) 00:03:13.790 Library asan found: YES 00:03:13.790 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:13.790 Message: lib/log: Defining dependency "log" 00:03:13.790 Message: lib/kvargs: Defining dependency "kvargs" 00:03:13.790 Message: lib/telemetry: Defining dependency "telemetry" 00:03:13.790 Library rt found: YES 00:03:13.790 Checking for function "getentropy" : NO 00:03:13.790 Message: lib/eal: Defining dependency "eal" 00:03:13.790 Message: lib/ring: Defining dependency "ring" 00:03:13.790 Message: lib/rcu: Defining dependency "rcu" 00:03:13.790 Message: lib/mempool: Defining dependency "mempool" 00:03:13.790 Message: lib/mbuf: Defining dependency "mbuf" 00:03:13.790 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:13.790 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:13.790 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:13.790 Fetching value of define "__AVX512DQ__" : 1 (cached) 
00:03:13.790 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:13.790 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:13.790 Compiler for C supports arguments -mpclmul: YES 00:03:13.790 Compiler for C supports arguments -maes: YES 00:03:13.790 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:13.790 Compiler for C supports arguments -mavx512bw: YES 00:03:13.790 Compiler for C supports arguments -mavx512dq: YES 00:03:13.790 Compiler for C supports arguments -mavx512vl: YES 00:03:13.790 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:13.790 Compiler for C supports arguments -mavx2: YES 00:03:13.790 Compiler for C supports arguments -mavx: YES 00:03:13.790 Message: lib/net: Defining dependency "net" 00:03:13.790 Message: lib/meter: Defining dependency "meter" 00:03:13.790 Message: lib/ethdev: Defining dependency "ethdev" 00:03:13.790 Message: lib/pci: Defining dependency "pci" 00:03:13.790 Message: lib/cmdline: Defining dependency "cmdline" 00:03:13.790 Message: lib/hash: Defining dependency "hash" 00:03:13.790 Message: lib/timer: Defining dependency "timer" 00:03:13.791 Message: lib/compressdev: Defining dependency "compressdev" 00:03:13.791 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:13.791 Message: lib/dmadev: Defining dependency "dmadev" 00:03:13.791 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:13.791 Message: lib/power: Defining dependency "power" 00:03:13.791 Message: lib/reorder: Defining dependency "reorder" 00:03:13.791 Message: lib/security: Defining dependency "security" 00:03:13.791 Has header "linux/userfaultfd.h" : YES 00:03:13.791 Has header "linux/vduse.h" : YES 00:03:13.791 Message: lib/vhost: Defining dependency "vhost" 00:03:13.791 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:13.791 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:13.791 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:13.791 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:13.791 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:13.791 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:13.791 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:13.791 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:13.791 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:13.791 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:13.791 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:13.791 Configuring doxy-api-html.conf using configuration 00:03:13.791 Configuring doxy-api-man.conf using configuration 00:03:13.791 Program mandb found: YES (/usr/bin/mandb) 00:03:13.791 Program sphinx-build found: NO 00:03:13.791 Configuring rte_build_config.h using configuration 00:03:13.791 Message: 00:03:13.791 ================= 00:03:13.791 Applications Enabled 00:03:13.791 ================= 00:03:13.791 00:03:13.791 apps: 00:03:13.791 00:03:13.791 00:03:13.791 Message: 00:03:13.791 ================= 00:03:13.791 Libraries Enabled 00:03:13.791 ================= 00:03:13.791 00:03:13.791 libs: 00:03:13.791 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:13.791 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:13.791 cryptodev, dmadev, power, reorder, security, vhost, 00:03:13.791 00:03:13.791 Message: 00:03:13.791 =============== 00:03:13.791 Drivers Enabled 00:03:13.791 =============== 00:03:13.791 00:03:13.791 common: 00:03:13.791 00:03:13.791 bus: 00:03:13.791 pci, vdev, 00:03:13.791 mempool: 00:03:13.791 ring, 00:03:13.791 dma: 00:03:13.791 00:03:13.791 net: 00:03:13.791 00:03:13.791 crypto: 00:03:13.791 00:03:13.791 compress: 00:03:13.791 00:03:13.791 vdpa: 00:03:13.791 00:03:13.791 00:03:13.791 Message: 00:03:13.791 ================= 00:03:13.791 Content Skipped 00:03:13.791 ================= 
00:03:13.791 00:03:13.791 apps: 00:03:13.791 dumpcap: explicitly disabled via build config 00:03:13.791 graph: explicitly disabled via build config 00:03:13.791 pdump: explicitly disabled via build config 00:03:13.791 proc-info: explicitly disabled via build config 00:03:13.791 test-acl: explicitly disabled via build config 00:03:13.791 test-bbdev: explicitly disabled via build config 00:03:13.791 test-cmdline: explicitly disabled via build config 00:03:13.791 test-compress-perf: explicitly disabled via build config 00:03:13.791 test-crypto-perf: explicitly disabled via build config 00:03:13.791 test-dma-perf: explicitly disabled via build config 00:03:13.791 test-eventdev: explicitly disabled via build config 00:03:13.791 test-fib: explicitly disabled via build config 00:03:13.791 test-flow-perf: explicitly disabled via build config 00:03:13.791 test-gpudev: explicitly disabled via build config 00:03:13.791 test-mldev: explicitly disabled via build config 00:03:13.791 test-pipeline: explicitly disabled via build config 00:03:13.791 test-pmd: explicitly disabled via build config 00:03:13.791 test-regex: explicitly disabled via build config 00:03:13.791 test-sad: explicitly disabled via build config 00:03:13.791 test-security-perf: explicitly disabled via build config 00:03:13.791 00:03:13.791 libs: 00:03:13.791 argparse: explicitly disabled via build config 00:03:13.791 ptr_compress: explicitly disabled via build config 00:03:13.791 metrics: explicitly disabled via build config 00:03:13.791 acl: explicitly disabled via build config 00:03:13.791 bbdev: explicitly disabled via build config 00:03:13.791 bitratestats: explicitly disabled via build config 00:03:13.791 bpf: explicitly disabled via build config 00:03:13.791 cfgfile: explicitly disabled via build config 00:03:13.791 distributor: explicitly disabled via build config 00:03:13.791 efd: explicitly disabled via build config 00:03:13.791 eventdev: explicitly disabled via build config 00:03:13.791 dispatcher: 
explicitly disabled via build config 00:03:13.791 gpudev: explicitly disabled via build config 00:03:13.791 gro: explicitly disabled via build config 00:03:13.791 gso: explicitly disabled via build config 00:03:13.791 ip_frag: explicitly disabled via build config 00:03:13.791 jobstats: explicitly disabled via build config 00:03:13.791 latencystats: explicitly disabled via build config 00:03:13.791 lpm: explicitly disabled via build config 00:03:13.791 member: explicitly disabled via build config 00:03:13.791 pcapng: explicitly disabled via build config 00:03:13.791 rawdev: explicitly disabled via build config 00:03:13.791 regexdev: explicitly disabled via build config 00:03:13.791 mldev: explicitly disabled via build config 00:03:13.791 rib: explicitly disabled via build config 00:03:13.791 sched: explicitly disabled via build config 00:03:13.791 stack: explicitly disabled via build config 00:03:13.791 ipsec: explicitly disabled via build config 00:03:13.791 pdcp: explicitly disabled via build config 00:03:13.791 fib: explicitly disabled via build config 00:03:13.791 port: explicitly disabled via build config 00:03:13.791 pdump: explicitly disabled via build config 00:03:13.791 table: explicitly disabled via build config 00:03:13.791 pipeline: explicitly disabled via build config 00:03:13.791 graph: explicitly disabled via build config 00:03:13.791 node: explicitly disabled via build config 00:03:13.791 00:03:13.791 drivers: 00:03:13.791 common/cpt: not in enabled drivers build config 00:03:13.791 common/dpaax: not in enabled drivers build config 00:03:13.791 common/iavf: not in enabled drivers build config 00:03:13.791 common/idpf: not in enabled drivers build config 00:03:13.791 common/ionic: not in enabled drivers build config 00:03:13.791 common/mvep: not in enabled drivers build config 00:03:13.791 common/octeontx: not in enabled drivers build config 00:03:13.791 bus/auxiliary: not in enabled drivers build config 00:03:13.791 bus/cdx: not in enabled drivers 
build config 00:03:13.791 bus/dpaa: not in enabled drivers build config 00:03:13.791 bus/fslmc: not in enabled drivers build config 00:03:13.791 bus/ifpga: not in enabled drivers build config 00:03:13.791 bus/platform: not in enabled drivers build config 00:03:13.791 bus/uacce: not in enabled drivers build config 00:03:13.791 bus/vmbus: not in enabled drivers build config 00:03:13.791 common/cnxk: not in enabled drivers build config 00:03:13.791 common/mlx5: not in enabled drivers build config 00:03:13.791 common/nfp: not in enabled drivers build config 00:03:13.791 common/nitrox: not in enabled drivers build config 00:03:13.791 common/qat: not in enabled drivers build config 00:03:13.791 common/sfc_efx: not in enabled drivers build config 00:03:13.791 mempool/bucket: not in enabled drivers build config 00:03:13.791 mempool/cnxk: not in enabled drivers build config 00:03:13.791 mempool/dpaa: not in enabled drivers build config 00:03:13.791 mempool/dpaa2: not in enabled drivers build config 00:03:13.791 mempool/octeontx: not in enabled drivers build config 00:03:13.791 mempool/stack: not in enabled drivers build config 00:03:13.791 dma/cnxk: not in enabled drivers build config 00:03:13.791 dma/dpaa: not in enabled drivers build config 00:03:13.791 dma/dpaa2: not in enabled drivers build config 00:03:13.791 dma/hisilicon: not in enabled drivers build config 00:03:13.791 dma/idxd: not in enabled drivers build config 00:03:13.791 dma/ioat: not in enabled drivers build config 00:03:13.791 dma/odm: not in enabled drivers build config 00:03:13.791 dma/skeleton: not in enabled drivers build config 00:03:13.791 net/af_packet: not in enabled drivers build config 00:03:13.791 net/af_xdp: not in enabled drivers build config 00:03:13.791 net/ark: not in enabled drivers build config 00:03:13.791 net/atlantic: not in enabled drivers build config 00:03:13.791 net/avp: not in enabled drivers build config 00:03:13.791 net/axgbe: not in enabled drivers build config 00:03:13.791 
net/bnx2x: not in enabled drivers build config 00:03:13.792 net/bnxt: not in enabled drivers build config 00:03:13.792 net/bonding: not in enabled drivers build config 00:03:13.792 net/cnxk: not in enabled drivers build config 00:03:13.792 net/cpfl: not in enabled drivers build config 00:03:13.792 net/cxgbe: not in enabled drivers build config 00:03:13.792 net/dpaa: not in enabled drivers build config 00:03:13.792 net/dpaa2: not in enabled drivers build config 00:03:13.792 net/e1000: not in enabled drivers build config 00:03:13.792 net/ena: not in enabled drivers build config 00:03:13.792 net/enetc: not in enabled drivers build config 00:03:13.792 net/enetfec: not in enabled drivers build config 00:03:13.792 net/enic: not in enabled drivers build config 00:03:13.792 net/failsafe: not in enabled drivers build config 00:03:13.792 net/fm10k: not in enabled drivers build config 00:03:13.792 net/gve: not in enabled drivers build config 00:03:13.792 net/hinic: not in enabled drivers build config 00:03:13.792 net/hns3: not in enabled drivers build config 00:03:13.792 net/i40e: not in enabled drivers build config 00:03:13.792 net/iavf: not in enabled drivers build config 00:03:13.792 net/ice: not in enabled drivers build config 00:03:13.792 net/idpf: not in enabled drivers build config 00:03:13.792 net/igc: not in enabled drivers build config 00:03:13.792 net/ionic: not in enabled drivers build config 00:03:13.792 net/ipn3ke: not in enabled drivers build config 00:03:13.792 net/ixgbe: not in enabled drivers build config 00:03:13.792 net/mana: not in enabled drivers build config 00:03:13.792 net/memif: not in enabled drivers build config 00:03:13.792 net/mlx4: not in enabled drivers build config 00:03:13.792 net/mlx5: not in enabled drivers build config 00:03:13.792 net/mvneta: not in enabled drivers build config 00:03:13.792 net/mvpp2: not in enabled drivers build config 00:03:13.792 net/netvsc: not in enabled drivers build config 00:03:13.792 net/nfb: not in enabled 
drivers build config 00:03:13.792 net/nfp: not in enabled drivers build config 00:03:13.792 net/ngbe: not in enabled drivers build config 00:03:13.792 net/ntnic: not in enabled drivers build config 00:03:13.792 net/null: not in enabled drivers build config 00:03:13.792 net/octeontx: not in enabled drivers build config 00:03:13.792 net/octeon_ep: not in enabled drivers build config 00:03:13.792 net/pcap: not in enabled drivers build config 00:03:13.792 net/pfe: not in enabled drivers build config 00:03:13.792 net/qede: not in enabled drivers build config 00:03:13.792 net/ring: not in enabled drivers build config 00:03:13.792 net/sfc: not in enabled drivers build config 00:03:13.792 net/softnic: not in enabled drivers build config 00:03:13.792 net/tap: not in enabled drivers build config 00:03:13.792 net/thunderx: not in enabled drivers build config 00:03:13.792 net/txgbe: not in enabled drivers build config 00:03:13.792 net/vdev_netvsc: not in enabled drivers build config 00:03:13.792 net/vhost: not in enabled drivers build config 00:03:13.792 net/virtio: not in enabled drivers build config 00:03:13.792 net/vmxnet3: not in enabled drivers build config 00:03:13.792 raw/*: missing internal dependency, "rawdev" 00:03:13.792 crypto/armv8: not in enabled drivers build config 00:03:13.792 crypto/bcmfs: not in enabled drivers build config 00:03:13.792 crypto/caam_jr: not in enabled drivers build config 00:03:13.792 crypto/ccp: not in enabled drivers build config 00:03:13.792 crypto/cnxk: not in enabled drivers build config 00:03:13.792 crypto/dpaa_sec: not in enabled drivers build config 00:03:13.792 crypto/dpaa2_sec: not in enabled drivers build config 00:03:13.792 crypto/ionic: not in enabled drivers build config 00:03:13.792 crypto/ipsec_mb: not in enabled drivers build config 00:03:13.792 crypto/mlx5: not in enabled drivers build config 00:03:13.792 crypto/mvsam: not in enabled drivers build config 00:03:13.792 crypto/nitrox: not in enabled drivers build config 
00:03:13.792 crypto/null: not in enabled drivers build config 00:03:13.792 crypto/octeontx: not in enabled drivers build config 00:03:13.792 crypto/openssl: not in enabled drivers build config 00:03:13.792 crypto/scheduler: not in enabled drivers build config 00:03:13.792 crypto/uadk: not in enabled drivers build config 00:03:13.792 crypto/virtio: not in enabled drivers build config 00:03:13.792 compress/isal: not in enabled drivers build config 00:03:13.792 compress/mlx5: not in enabled drivers build config 00:03:13.792 compress/nitrox: not in enabled drivers build config 00:03:13.792 compress/octeontx: not in enabled drivers build config 00:03:13.792 compress/uadk: not in enabled drivers build config 00:03:13.792 compress/zlib: not in enabled drivers build config 00:03:13.792 regex/*: missing internal dependency, "regexdev" 00:03:13.792 ml/*: missing internal dependency, "mldev" 00:03:13.792 vdpa/ifc: not in enabled drivers build config 00:03:13.792 vdpa/mlx5: not in enabled drivers build config 00:03:13.792 vdpa/nfp: not in enabled drivers build config 00:03:13.792 vdpa/sfc: not in enabled drivers build config 00:03:13.792 event/*: missing internal dependency, "eventdev" 00:03:13.792 baseband/*: missing internal dependency, "bbdev" 00:03:13.792 gpu/*: missing internal dependency, "gpudev" 00:03:13.792 00:03:13.792 00:03:13.792 Build targets in project: 85 00:03:13.792 00:03:13.792 DPDK 24.07.0 00:03:13.792 00:03:13.792 User defined options 00:03:13.792 buildtype : debug 00:03:13.792 default_library : shared 00:03:13.792 libdir : lib 00:03:13.792 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:13.792 b_sanitize : address 00:03:13.792 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:13.792 c_link_args : 00:03:13.792 cpu_instruction_set: native 00:03:13.792 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:13.792 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,ptr_compress,rawdev,regexdev,rib,sched,stack,table 00:03:13.792 enable_docs : false 00:03:13.792 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:13.792 enable_kmods : false 00:03:13.792 max_lcores : 128 00:03:13.792 tests : false 00:03:13.792 00:03:13.792 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.792 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:13.792 [1/269] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:13.792 [2/269] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:13.792 [3/269] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:13.792 [4/269] Linking static target lib/librte_log.a 00:03:13.792 [5/269] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:13.792 [6/269] Linking static target lib/librte_kvargs.a 00:03:13.792 [7/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:13.792 [8/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:13.792 [9/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:13.792 [10/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:13.792 [11/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:13.792 [12/269] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.792 [13/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
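The "User defined options" summary above corresponds roughly to a `meson setup` invocation of the following shape. This is a reconstruction from the logged options for illustration only — SPDK's build scripts assemble the real command, and the long `disable_apps`/`disable_libs` lists are abbreviated here (the full lists appear in the log above):

```shell
# Reconstructed from the "User defined options" block above; illustrative only.
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  -Db_sanitize=address \
  -Dmax_lcores=128 \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info,...   # full list as logged above
```

`-Db_sanitize=address` is how the `--enable-asan` choice from SPDK's configure step propagates into the DPDK subproject, and the `c_args` line in the log (`-Wno-stringop-overflow ... -Werror`) is passed through the same way.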
00:03:13.792 [14/269] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:13.792 [15/269] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:13.792 [16/269] Linking static target lib/librte_telemetry.a 00:03:13.792 [17/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:14.369 [18/269] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.369 [19/269] Linking target lib/librte_log.so.24.2 00:03:14.369 [20/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:14.369 [21/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:14.369 [22/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:14.369 [23/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:14.627 [24/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:14.627 [25/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:14.627 [26/269] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:03:14.628 [27/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:14.628 [28/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:14.628 [29/269] Linking target lib/librte_kvargs.so.24.2 00:03:14.886 [30/269] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.886 [31/269] Linking target lib/librte_telemetry.so.24.2 00:03:14.886 [32/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:14.886 [33/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:15.144 [34/269] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:03:15.144 [35/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:15.144 
[36/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:15.144 [37/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:15.144 [38/269] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:15.402 [39/269] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:03:15.403 [40/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:15.403 [41/269] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:15.403 [42/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:15.403 [43/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:15.661 [44/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:15.919 [45/269] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:15.919 [46/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:15.919 [47/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:15.919 [48/269] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:15.919 [49/269] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:16.177 [50/269] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:16.177 [51/269] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:16.177 [52/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:16.177 [53/269] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:16.435 [54/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:16.435 [55/269] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:16.435 [56/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:16.693 [57/269] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 
00:03:16.693 [58/269] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:16.951 [59/269] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:16.951 [60/269] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:16.951 [61/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:16.951 [62/269] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:16.951 [63/269] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:17.209 [64/269] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:17.209 [65/269] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:17.209 [66/269] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:17.468 [67/269] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:17.468 [68/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:17.468 [69/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:17.468 [70/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:17.727 [71/269] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:17.727 [72/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:17.727 [73/269] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:17.727 [74/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:17.727 [75/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:17.985 [76/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:17.985 [77/269] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:17.985 [78/269] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:17.985 [79/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:18.242 [80/269] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:18.242 
[81/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:18.242 [82/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:18.242 [83/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:18.242 [84/269] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:18.509 [85/269] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:18.509 [86/269] Linking static target lib/librte_ring.a 00:03:18.509 [87/269] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:18.509 [88/269] Linking static target lib/librte_eal.a 00:03:18.810 [89/269] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:18.810 [90/269] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:18.810 [91/269] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:18.810 [92/269] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:19.068 [93/269] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.327 [94/269] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:19.327 [95/269] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:19.327 [96/269] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:19.327 [97/269] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:19.327 [98/269] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:19.585 [99/269] Linking static target lib/librte_rcu.a 00:03:19.585 [100/269] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:19.844 [101/269] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:19.844 [102/269] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:19.844 [103/269] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:20.102 [104/269] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:20.102 
[105/269] Linking static target lib/librte_mbuf.a 00:03:20.102 [106/269] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:20.102 [107/269] Linking static target lib/librte_meter.a 00:03:20.102 [108/269] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.102 [109/269] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:20.102 [110/269] Linking static target lib/librte_mempool.a 00:03:20.360 [111/269] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:20.360 [112/269] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:20.619 [113/269] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:20.619 [114/269] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:20.619 [115/269] Linking static target lib/librte_net.a 00:03:20.877 [116/269] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:20.877 [117/269] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.135 [118/269] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:21.135 [119/269] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.135 [120/269] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:21.393 [121/269] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.651 [122/269] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:21.911 [123/269] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:21.911 [124/269] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.911 [125/269] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:22.267 [126/269] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:22.267 [127/269] Linking static target lib/librte_pci.a 
00:03:22.267 [128/269] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:22.267 [129/269] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:22.267 [130/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:22.524 [131/269] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:22.524 [132/269] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:22.524 [133/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:22.782 [134/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:22.782 [135/269] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.782 [136/269] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:22.782 [137/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:22.782 [138/269] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:22.782 [139/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:22.782 [140/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:22.782 [141/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:22.782 [142/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:23.041 [143/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:23.041 [144/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:23.041 [145/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:23.300 [146/269] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:23.300 [147/269] Linking static target lib/librte_cmdline.a 00:03:23.300 [148/269] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:23.300 [149/269] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:23.559 
[150/269] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:23.559 [151/269] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:23.559 [152/269] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:23.559 [153/269] Linking static target lib/librte_ethdev.a 00:03:23.818 [154/269] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:23.818 [155/269] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:24.076 [156/269] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:24.076 [157/269] Linking static target lib/librte_timer.a 00:03:24.076 [158/269] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:24.334 [159/269] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:24.901 [160/269] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:24.901 [161/269] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:24.901 [162/269] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.901 [163/269] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:24.901 [164/269] Linking static target lib/librte_hash.a 00:03:24.901 [165/269] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:24.901 [166/269] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:24.901 [167/269] Linking static target lib/librte_dmadev.a 00:03:25.216 [168/269] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:25.216 [169/269] Linking static target lib/librte_compressdev.a 00:03:25.216 [170/269] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:25.216 [171/269] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.474 [172/269] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 
00:03:25.474 [173/269] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:25.474 [174/269] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:26.039 [175/269] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:26.039 [176/269] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:26.039 [177/269] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:26.298 [178/269] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.298 [179/269] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:26.298 [180/269] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.298 [181/269] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:26.557 [182/269] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:26.557 [183/269] Linking static target lib/librte_cryptodev.a 00:03:26.557 [184/269] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.815 [185/269] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:26.815 [186/269] Linking static target lib/librte_power.a 00:03:26.815 [187/269] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:27.074 [188/269] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:27.074 [189/269] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:27.074 [190/269] Linking static target lib/librte_security.a 00:03:27.333 [191/269] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:27.333 [192/269] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:27.592 [193/269] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:27.592 [194/269] Linking static target lib/librte_reorder.a 00:03:27.850 [195/269] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:28.110 [196/269] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.110 [197/269] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.368 [198/269] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.368 [199/269] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:28.368 [200/269] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:28.627 [201/269] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:28.887 [202/269] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:28.887 [203/269] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:29.146 [204/269] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:29.146 [205/269] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:29.404 [206/269] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:29.404 [207/269] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:29.404 [208/269] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:29.404 [209/269] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:29.404 [210/269] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:29.404 [211/269] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:29.404 [212/269] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.663 [213/269] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:29.663 [214/269] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.663 [215/269] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.663 [216/269] 
Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:29.663 [217/269] Linking static target drivers/librte_bus_vdev.a 00:03:29.663 [218/269] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.663 [219/269] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.921 [220/269] Linking static target drivers/librte_bus_pci.a 00:03:29.921 [221/269] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:29.921 [222/269] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:30.180 [223/269] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.180 [224/269] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:30.180 [225/269] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:30.180 [226/269] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:30.180 [227/269] Linking static target drivers/librte_mempool_ring.a 00:03:30.484 [228/269] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.059 [229/269] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:32.439 [230/269] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.439 [231/269] Linking target lib/librte_eal.so.24.2 00:03:32.439 [232/269] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:03:32.698 [233/269] Linking target lib/librte_meter.so.24.2 00:03:32.698 [234/269] Linking target lib/librte_ring.so.24.2 00:03:32.698 [235/269] Linking target lib/librte_dmadev.so.24.2 00:03:32.698 [236/269] Linking target lib/librte_pci.so.24.2 00:03:32.698 [237/269] Linking target lib/librte_timer.so.24.2 00:03:32.698 [238/269] Linking target drivers/librte_bus_vdev.so.24.2 
00:03:32.698 [239/269] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:03:32.698 [240/269] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:03:32.698 [241/269] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:03:32.698 [242/269] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:03:32.698 [243/269] Linking target drivers/librte_bus_pci.so.24.2 00:03:32.698 [244/269] Linking target lib/librte_mempool.so.24.2 00:03:32.957 [245/269] Linking target lib/librte_rcu.so.24.2 00:03:32.957 [246/269] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:03:32.957 [247/269] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:03:32.957 [248/269] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:03:32.957 [249/269] Linking target drivers/librte_mempool_ring.so.24.2 00:03:32.957 [250/269] Linking target lib/librte_mbuf.so.24.2 00:03:33.216 [251/269] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:03:33.216 [252/269] Linking target lib/librte_reorder.so.24.2 00:03:33.216 [253/269] Linking target lib/librte_net.so.24.2 00:03:33.216 [254/269] Linking target lib/librte_cryptodev.so.24.2 00:03:33.216 [255/269] Linking target lib/librte_compressdev.so.24.2 00:03:33.475 [256/269] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:03:33.475 [257/269] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:03:33.475 [258/269] Linking target lib/librte_cmdline.so.24.2 00:03:33.475 [259/269] Linking target lib/librte_hash.so.24.2 00:03:33.475 [260/269] Linking target lib/librte_security.so.24.2 00:03:33.734 [261/269] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:03:34.302 [262/269] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:34.302 [263/269] Linking target lib/librte_ethdev.so.24.2 00:03:34.562 [264/269] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:03:34.562 [265/269] Linking target lib/librte_power.so.24.2 00:03:36.471 [266/269] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:36.471 [267/269] Linking static target lib/librte_vhost.a 00:03:39.004 [268/269] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.004 [269/269] Linking target lib/librte_vhost.so.24.2 00:03:39.004 INFO: autodetecting backend as ninja 00:03:39.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:00.970 CC lib/ut_mock/mock.o 00:04:00.970 CC lib/log/log.o 00:04:00.970 CC lib/log/log_flags.o 00:04:00.970 CC lib/log/log_deprecated.o 00:04:00.970 CC lib/ut/ut.o 00:04:00.970 LIB libspdk_ut_mock.a 00:04:00.970 LIB libspdk_log.a 00:04:00.970 LIB libspdk_ut.a 00:04:00.970 SO libspdk_ut_mock.so.6.0 00:04:00.970 SO libspdk_log.so.7.1 00:04:00.970 SO libspdk_ut.so.2.0 00:04:00.970 SYMLINK libspdk_ut_mock.so 00:04:00.970 SYMLINK libspdk_log.so 00:04:00.970 SYMLINK libspdk_ut.so 00:04:00.970 CC lib/dma/dma.o 00:04:00.970 CXX lib/trace_parser/trace.o 00:04:00.970 CC lib/ioat/ioat.o 00:04:00.970 CC lib/util/base64.o 00:04:00.970 CC lib/util/bit_array.o 00:04:00.970 CC lib/util/cpuset.o 00:04:00.970 CC lib/util/crc32c.o 00:04:00.970 CC lib/util/crc16.o 00:04:00.970 CC lib/util/crc32.o 00:04:00.970 CC lib/vfio_user/host/vfio_user_pci.o 00:04:00.970 CC lib/util/crc32_ieee.o 00:04:00.970 CC lib/util/crc64.o 00:04:00.970 CC lib/util/dif.o 00:04:00.970 LIB libspdk_dma.a 00:04:00.970 SO libspdk_dma.so.5.0 00:04:00.970 CC lib/vfio_user/host/vfio_user.o 00:04:00.970 CC lib/util/fd.o 00:04:00.970 CC lib/util/fd_group.o 00:04:00.970 SYMLINK libspdk_dma.so 00:04:00.970 CC lib/util/file.o 00:04:00.970 CC lib/util/hexlify.o 00:04:00.970 CC 
lib/util/iov.o 00:04:00.970 LIB libspdk_ioat.a 00:04:00.970 SO libspdk_ioat.so.7.0 00:04:00.970 CC lib/util/math.o 00:04:00.970 CC lib/util/net.o 00:04:00.970 SYMLINK libspdk_ioat.so 00:04:00.970 CC lib/util/pipe.o 00:04:00.970 CC lib/util/strerror_tls.o 00:04:00.970 CC lib/util/string.o 00:04:00.970 LIB libspdk_vfio_user.a 00:04:00.970 CC lib/util/uuid.o 00:04:00.970 CC lib/util/xor.o 00:04:00.970 SO libspdk_vfio_user.so.5.0 00:04:00.970 CC lib/util/zipf.o 00:04:00.970 CC lib/util/md5.o 00:04:00.970 SYMLINK libspdk_vfio_user.so 00:04:00.970 LIB libspdk_util.a 00:04:00.970 SO libspdk_util.so.10.0 00:04:00.970 LIB libspdk_trace_parser.a 00:04:00.970 SYMLINK libspdk_util.so 00:04:00.970 SO libspdk_trace_parser.so.6.0 00:04:00.970 SYMLINK libspdk_trace_parser.so 00:04:00.970 CC lib/conf/conf.o 00:04:00.970 CC lib/json/json_parse.o 00:04:00.970 CC lib/vmd/vmd.o 00:04:00.970 CC lib/json/json_util.o 00:04:00.970 CC lib/vmd/led.o 00:04:00.970 CC lib/json/json_write.o 00:04:00.970 CC lib/env_dpdk/env.o 00:04:00.970 CC lib/idxd/idxd.o 00:04:00.970 CC lib/rdma_provider/common.o 00:04:00.970 CC lib/rdma_utils/rdma_utils.o 00:04:00.970 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:00.970 CC lib/env_dpdk/memory.o 00:04:00.970 LIB libspdk_conf.a 00:04:00.970 CC lib/env_dpdk/pci.o 00:04:00.970 CC lib/env_dpdk/init.o 00:04:00.970 SO libspdk_conf.so.6.0 00:04:00.970 LIB libspdk_rdma_utils.a 00:04:00.970 LIB libspdk_json.a 00:04:00.970 SYMLINK libspdk_conf.so 00:04:00.970 CC lib/idxd/idxd_user.o 00:04:00.970 SO libspdk_rdma_utils.so.1.0 00:04:00.970 SO libspdk_json.so.6.0 00:04:00.970 LIB libspdk_rdma_provider.a 00:04:00.970 SO libspdk_rdma_provider.so.6.0 00:04:00.970 SYMLINK libspdk_rdma_utils.so 00:04:00.970 CC lib/idxd/idxd_kernel.o 00:04:00.970 SYMLINK libspdk_json.so 00:04:00.970 SYMLINK libspdk_rdma_provider.so 00:04:00.970 CC lib/env_dpdk/threads.o 00:04:00.970 CC lib/env_dpdk/pci_ioat.o 00:04:00.970 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.970 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.970 CC lib/env_dpdk/pci_virtio.o 00:04:00.970 CC lib/env_dpdk/pci_vmd.o 00:04:00.970 CC lib/env_dpdk/pci_idxd.o 00:04:00.970 CC lib/env_dpdk/pci_event.o 00:04:00.970 CC lib/env_dpdk/sigbus_handler.o 00:04:00.970 LIB libspdk_idxd.a 00:04:00.970 LIB libspdk_vmd.a 00:04:00.970 SO libspdk_idxd.so.12.1 00:04:00.970 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.970 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.970 SO libspdk_vmd.so.6.0 00:04:00.970 CC lib/env_dpdk/pci_dpdk.o 00:04:00.970 SYMLINK libspdk_idxd.so 00:04:00.970 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.970 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.970 SYMLINK libspdk_vmd.so 00:04:00.970 LIB libspdk_jsonrpc.a 00:04:00.970 SO libspdk_jsonrpc.so.6.0 00:04:00.970 SYMLINK libspdk_jsonrpc.so 00:04:00.970 CC lib/rpc/rpc.o 00:04:01.229 LIB libspdk_env_dpdk.a 00:04:01.229 LIB libspdk_rpc.a 00:04:01.486 SO libspdk_rpc.so.6.0 00:04:01.486 SO libspdk_env_dpdk.so.15.0 00:04:01.486 SYMLINK libspdk_rpc.so 00:04:01.486 SYMLINK libspdk_env_dpdk.so 00:04:01.743 CC lib/trace/trace.o 00:04:01.743 CC lib/trace/trace_rpc.o 00:04:01.743 CC lib/trace/trace_flags.o 00:04:01.743 CC lib/notify/notify_rpc.o 00:04:01.743 CC lib/notify/notify.o 00:04:01.743 CC lib/keyring/keyring_rpc.o 00:04:01.743 CC lib/keyring/keyring.o 00:04:02.000 LIB libspdk_notify.a 00:04:02.000 SO libspdk_notify.so.6.0 00:04:02.000 LIB libspdk_keyring.a 00:04:02.000 LIB libspdk_trace.a 00:04:02.000 SYMLINK libspdk_notify.so 00:04:02.000 SO libspdk_keyring.so.2.0 00:04:02.259 SO libspdk_trace.so.11.0 00:04:02.259 SYMLINK libspdk_keyring.so 00:04:02.259 SYMLINK libspdk_trace.so 00:04:02.517 CC lib/thread/thread.o 00:04:02.517 CC lib/thread/iobuf.o 00:04:02.517 CC lib/sock/sock.o 00:04:02.517 CC lib/sock/sock_rpc.o 00:04:03.083 LIB libspdk_sock.a 00:04:03.083 SO libspdk_sock.so.10.0 00:04:03.377 SYMLINK libspdk_sock.so 00:04:03.671 CC lib/nvme/nvme_ctrlr.o 00:04:03.671 CC lib/nvme/nvme_ns.o 00:04:03.671 CC lib/nvme/nvme_fabric.o 
00:04:03.671 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:03.671 CC lib/nvme/nvme.o 00:04:03.671 CC lib/nvme/nvme_pcie_common.o 00:04:03.671 CC lib/nvme/nvme_pcie.o 00:04:03.671 CC lib/nvme/nvme_ns_cmd.o 00:04:03.671 CC lib/nvme/nvme_qpair.o 00:04:04.606 CC lib/nvme/nvme_quirks.o 00:04:04.606 CC lib/nvme/nvme_transport.o 00:04:04.606 CC lib/nvme/nvme_discovery.o 00:04:04.606 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.606 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.606 LIB libspdk_thread.a 00:04:04.865 CC lib/nvme/nvme_tcp.o 00:04:04.865 SO libspdk_thread.so.10.2 00:04:04.865 CC lib/nvme/nvme_opal.o 00:04:04.865 SYMLINK libspdk_thread.so 00:04:04.865 CC lib/nvme/nvme_io_msg.o 00:04:04.865 CC lib/nvme/nvme_poll_group.o 00:04:05.124 CC lib/nvme/nvme_zns.o 00:04:05.124 CC lib/nvme/nvme_stubs.o 00:04:05.124 CC lib/nvme/nvme_auth.o 00:04:05.382 CC lib/nvme/nvme_cuse.o 00:04:05.382 CC lib/nvme/nvme_rdma.o 00:04:05.948 CC lib/accel/accel.o 00:04:05.948 CC lib/blob/blobstore.o 00:04:05.948 CC lib/init/json_config.o 00:04:05.948 CC lib/virtio/virtio.o 00:04:05.948 CC lib/fsdev/fsdev.o 00:04:06.206 CC lib/init/subsystem.o 00:04:06.464 CC lib/init/subsystem_rpc.o 00:04:06.464 CC lib/virtio/virtio_vhost_user.o 00:04:06.464 CC lib/virtio/virtio_vfio_user.o 00:04:06.464 CC lib/virtio/virtio_pci.o 00:04:06.464 CC lib/init/rpc.o 00:04:06.464 CC lib/blob/request.o 00:04:06.723 CC lib/accel/accel_rpc.o 00:04:06.723 LIB libspdk_init.a 00:04:06.723 SO libspdk_init.so.6.0 00:04:06.982 CC lib/blob/zeroes.o 00:04:06.982 CC lib/fsdev/fsdev_io.o 00:04:06.982 LIB libspdk_virtio.a 00:04:06.982 CC lib/fsdev/fsdev_rpc.o 00:04:06.982 SYMLINK libspdk_init.so 00:04:06.982 CC lib/accel/accel_sw.o 00:04:06.982 SO libspdk_virtio.so.7.0 00:04:06.982 SYMLINK libspdk_virtio.so 00:04:06.982 CC lib/blob/blob_bs_dev.o 00:04:07.241 LIB libspdk_nvme.a 00:04:07.241 CC lib/event/app.o 00:04:07.241 CC lib/event/app_rpc.o 00:04:07.241 CC lib/event/log_rpc.o 00:04:07.241 CC lib/event/reactor.o 00:04:07.241 CC 
lib/event/scheduler_static.o 00:04:07.241 LIB libspdk_fsdev.a 00:04:07.547 LIB libspdk_accel.a 00:04:07.547 SO libspdk_fsdev.so.1.0 00:04:07.547 SO libspdk_nvme.so.14.0 00:04:07.547 SO libspdk_accel.so.16.0 00:04:07.547 SYMLINK libspdk_fsdev.so 00:04:07.547 SYMLINK libspdk_accel.so 00:04:07.821 SYMLINK libspdk_nvme.so 00:04:07.821 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:07.821 CC lib/bdev/bdev_rpc.o 00:04:07.821 LIB libspdk_event.a 00:04:07.821 CC lib/bdev/bdev.o 00:04:07.821 CC lib/bdev/scsi_nvme.o 00:04:07.821 CC lib/bdev/bdev_zone.o 00:04:07.821 CC lib/bdev/part.o 00:04:08.080 SO libspdk_event.so.15.0 00:04:08.080 SYMLINK libspdk_event.so 00:04:08.648 LIB libspdk_fuse_dispatcher.a 00:04:08.649 SO libspdk_fuse_dispatcher.so.1.0 00:04:08.649 SYMLINK libspdk_fuse_dispatcher.so 00:04:10.552 LIB libspdk_blob.a 00:04:10.552 SO libspdk_blob.so.11.0 00:04:10.552 SYMLINK libspdk_blob.so 00:04:10.810 CC lib/blobfs/blobfs.o 00:04:10.810 CC lib/blobfs/tree.o 00:04:10.810 CC lib/lvol/lvol.o 00:04:11.377 LIB libspdk_bdev.a 00:04:11.377 SO libspdk_bdev.so.17.0 00:04:11.635 SYMLINK libspdk_bdev.so 00:04:11.893 CC lib/scsi/dev.o 00:04:11.893 CC lib/scsi/port.o 00:04:11.893 CC lib/scsi/lun.o 00:04:11.893 CC lib/scsi/scsi.o 00:04:11.893 CC lib/nvmf/ctrlr.o 00:04:11.893 CC lib/nbd/nbd.o 00:04:11.893 CC lib/ftl/ftl_core.o 00:04:11.893 CC lib/ublk/ublk.o 00:04:11.893 LIB libspdk_blobfs.a 00:04:12.178 SO libspdk_blobfs.so.10.0 00:04:12.178 CC lib/ftl/ftl_init.o 00:04:12.178 LIB libspdk_lvol.a 00:04:12.178 CC lib/ftl/ftl_layout.o 00:04:12.178 SO libspdk_lvol.so.10.0 00:04:12.178 SYMLINK libspdk_blobfs.so 00:04:12.178 CC lib/nbd/nbd_rpc.o 00:04:12.178 CC lib/ftl/ftl_debug.o 00:04:12.178 SYMLINK libspdk_lvol.so 00:04:12.178 CC lib/ftl/ftl_io.o 00:04:12.178 CC lib/scsi/scsi_bdev.o 00:04:12.438 CC lib/scsi/scsi_pr.o 00:04:12.438 CC lib/scsi/scsi_rpc.o 00:04:12.438 CC lib/nvmf/ctrlr_discovery.o 00:04:12.438 CC lib/nvmf/ctrlr_bdev.o 00:04:12.438 LIB libspdk_nbd.a 00:04:12.438 SO 
libspdk_nbd.so.7.0 00:04:12.438 CC lib/nvmf/subsystem.o 00:04:12.438 CC lib/ftl/ftl_sb.o 00:04:12.438 CC lib/ftl/ftl_l2p.o 00:04:12.695 SYMLINK libspdk_nbd.so 00:04:12.695 CC lib/scsi/task.o 00:04:12.695 CC lib/ftl/ftl_l2p_flat.o 00:04:12.695 CC lib/ublk/ublk_rpc.o 00:04:12.695 CC lib/ftl/ftl_nv_cache.o 00:04:12.695 CC lib/nvmf/nvmf.o 00:04:12.954 CC lib/ftl/ftl_band.o 00:04:12.954 LIB libspdk_scsi.a 00:04:12.954 LIB libspdk_ublk.a 00:04:12.954 CC lib/ftl/ftl_band_ops.o 00:04:12.954 SO libspdk_ublk.so.3.0 00:04:12.954 SO libspdk_scsi.so.9.0 00:04:12.954 SYMLINK libspdk_ublk.so 00:04:13.212 CC lib/nvmf/nvmf_rpc.o 00:04:13.212 SYMLINK libspdk_scsi.so 00:04:13.212 CC lib/nvmf/transport.o 00:04:13.212 CC lib/nvmf/tcp.o 00:04:13.470 CC lib/nvmf/stubs.o 00:04:13.470 CC lib/nvmf/mdns_server.o 00:04:13.728 CC lib/iscsi/conn.o 00:04:13.985 CC lib/nvmf/rdma.o 00:04:13.985 CC lib/nvmf/auth.o 00:04:13.986 CC lib/iscsi/init_grp.o 00:04:14.243 CC lib/ftl/ftl_writer.o 00:04:14.243 CC lib/iscsi/iscsi.o 00:04:14.243 CC lib/ftl/ftl_rq.o 00:04:14.501 CC lib/vhost/vhost.o 00:04:14.501 CC lib/vhost/vhost_rpc.o 00:04:14.501 CC lib/iscsi/param.o 00:04:14.502 CC lib/iscsi/portal_grp.o 00:04:14.502 CC lib/ftl/ftl_reloc.o 00:04:14.502 CC lib/ftl/ftl_l2p_cache.o 00:04:14.760 CC lib/iscsi/tgt_node.o 00:04:15.018 CC lib/iscsi/iscsi_subsystem.o 00:04:15.018 CC lib/iscsi/iscsi_rpc.o 00:04:15.276 CC lib/iscsi/task.o 00:04:15.276 CC lib/ftl/ftl_p2l.o 00:04:15.276 CC lib/vhost/vhost_scsi.o 00:04:15.276 CC lib/ftl/ftl_p2l_log.o 00:04:15.535 CC lib/ftl/mngt/ftl_mngt.o 00:04:15.535 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:15.535 CC lib/vhost/vhost_blk.o 00:04:15.535 CC lib/vhost/rte_vhost_user.o 00:04:15.793 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:15.793 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:15.793 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:15.793 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:15.793 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:15.793 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:15.793 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:04:16.051 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:16.051 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:16.051 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:16.051 LIB libspdk_iscsi.a 00:04:16.310 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:16.310 SO libspdk_iscsi.so.8.0 00:04:16.310 CC lib/ftl/utils/ftl_conf.o 00:04:16.310 CC lib/ftl/utils/ftl_md.o 00:04:16.310 CC lib/ftl/utils/ftl_mempool.o 00:04:16.310 CC lib/ftl/utils/ftl_bitmap.o 00:04:16.310 CC lib/ftl/utils/ftl_property.o 00:04:16.570 SYMLINK libspdk_iscsi.so 00:04:16.570 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:16.570 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:16.570 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:16.570 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:16.570 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:16.570 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:16.570 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:16.830 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:16.830 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:16.830 LIB libspdk_nvmf.a 00:04:16.830 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:16.830 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:16.830 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:16.830 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:16.830 LIB libspdk_vhost.a 00:04:16.830 CC lib/ftl/base/ftl_base_dev.o 00:04:16.830 CC lib/ftl/base/ftl_base_bdev.o 00:04:17.089 SO libspdk_vhost.so.8.0 00:04:17.089 CC lib/ftl/ftl_trace.o 00:04:17.089 SO libspdk_nvmf.so.19.0 00:04:17.089 SYMLINK libspdk_vhost.so 00:04:17.348 SYMLINK libspdk_nvmf.so 00:04:17.348 LIB libspdk_ftl.a 00:04:17.608 SO libspdk_ftl.so.9.0 00:04:18.177 SYMLINK libspdk_ftl.so 00:04:18.437 CC module/env_dpdk/env_dpdk_rpc.o 00:04:18.696 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:18.696 CC module/keyring/file/keyring.o 00:04:18.696 CC module/fsdev/aio/fsdev_aio.o 00:04:18.696 CC module/scheduler/gscheduler/gscheduler.o 00:04:18.696 CC module/blob/bdev/blob_bdev.o 00:04:18.696 CC module/keyring/linux/keyring.o 00:04:18.696 CC 
module/accel/error/accel_error.o 00:04:18.696 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:18.696 CC module/sock/posix/posix.o 00:04:18.696 LIB libspdk_env_dpdk_rpc.a 00:04:18.696 SO libspdk_env_dpdk_rpc.so.6.0 00:04:18.696 LIB libspdk_scheduler_gscheduler.a 00:04:18.696 SYMLINK libspdk_env_dpdk_rpc.so 00:04:18.697 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:18.697 CC module/keyring/linux/keyring_rpc.o 00:04:18.697 LIB libspdk_scheduler_dpdk_governor.a 00:04:18.697 CC module/accel/error/accel_error_rpc.o 00:04:18.697 SO libspdk_scheduler_gscheduler.so.4.0 00:04:18.697 CC module/keyring/file/keyring_rpc.o 00:04:18.956 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:18.956 LIB libspdk_scheduler_dynamic.a 00:04:18.956 SO libspdk_scheduler_dynamic.so.4.0 00:04:18.956 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.956 CC module/fsdev/aio/linux_aio_mgr.o 00:04:18.956 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:18.956 LIB libspdk_blob_bdev.a 00:04:18.956 SYMLINK libspdk_scheduler_dynamic.so 00:04:18.956 LIB libspdk_keyring_linux.a 00:04:18.956 SO libspdk_blob_bdev.so.11.0 00:04:18.956 LIB libspdk_keyring_file.a 00:04:18.956 LIB libspdk_accel_error.a 00:04:18.956 SO libspdk_keyring_linux.so.1.0 00:04:18.956 SO libspdk_keyring_file.so.2.0 00:04:18.956 SO libspdk_accel_error.so.2.0 00:04:18.956 SYMLINK libspdk_blob_bdev.so 00:04:18.956 SYMLINK libspdk_keyring_linux.so 00:04:19.214 SYMLINK libspdk_keyring_file.so 00:04:19.214 SYMLINK libspdk_accel_error.so 00:04:19.214 CC module/accel/dsa/accel_dsa.o 00:04:19.214 CC module/accel/dsa/accel_dsa_rpc.o 00:04:19.214 CC module/accel/ioat/accel_ioat.o 00:04:19.214 CC module/accel/iaa/accel_iaa.o 00:04:19.214 CC module/accel/iaa/accel_iaa_rpc.o 00:04:19.474 CC module/bdev/delay/vbdev_delay.o 00:04:19.474 CC module/bdev/error/vbdev_error.o 00:04:19.474 CC module/bdev/gpt/gpt.o 00:04:19.474 CC module/accel/ioat/accel_ioat_rpc.o 00:04:19.474 CC module/blobfs/bdev/blobfs_bdev.o 00:04:19.474 CC module/bdev/gpt/vbdev_gpt.o 
00:04:19.474 LIB libspdk_accel_dsa.a 00:04:19.474 LIB libspdk_fsdev_aio.a 00:04:19.474 SO libspdk_accel_dsa.so.5.0 00:04:19.474 LIB libspdk_accel_iaa.a 00:04:19.474 LIB libspdk_accel_ioat.a 00:04:19.474 SO libspdk_fsdev_aio.so.1.0 00:04:19.474 SO libspdk_accel_iaa.so.3.0 00:04:19.732 SO libspdk_accel_ioat.so.6.0 00:04:19.732 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:19.732 LIB libspdk_sock_posix.a 00:04:19.732 SYMLINK libspdk_accel_dsa.so 00:04:19.732 SYMLINK libspdk_accel_iaa.so 00:04:19.732 SYMLINK libspdk_fsdev_aio.so 00:04:19.732 SO libspdk_sock_posix.so.6.0 00:04:19.732 CC module/bdev/error/vbdev_error_rpc.o 00:04:19.732 SYMLINK libspdk_accel_ioat.so 00:04:19.732 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:19.732 SYMLINK libspdk_sock_posix.so 00:04:19.732 CC module/bdev/lvol/vbdev_lvol.o 00:04:19.732 CC module/bdev/null/bdev_null.o 00:04:19.732 CC module/bdev/malloc/bdev_malloc.o 00:04:19.732 LIB libspdk_blobfs_bdev.a 00:04:19.732 CC module/bdev/null/bdev_null_rpc.o 00:04:19.732 LIB libspdk_bdev_gpt.a 00:04:19.991 SO libspdk_blobfs_bdev.so.6.0 00:04:19.991 LIB libspdk_bdev_error.a 00:04:19.991 SO libspdk_bdev_gpt.so.6.0 00:04:19.991 LIB libspdk_bdev_delay.a 00:04:19.991 CC module/bdev/nvme/bdev_nvme.o 00:04:19.991 SO libspdk_bdev_error.so.6.0 00:04:19.991 SO libspdk_bdev_delay.so.6.0 00:04:19.991 SYMLINK libspdk_blobfs_bdev.so 00:04:19.991 CC module/bdev/passthru/vbdev_passthru.o 00:04:19.991 SYMLINK libspdk_bdev_gpt.so 00:04:19.991 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:19.991 CC module/bdev/nvme/nvme_rpc.o 00:04:19.991 SYMLINK libspdk_bdev_error.so 00:04:19.991 CC module/bdev/nvme/bdev_mdns_client.o 00:04:19.991 SYMLINK libspdk_bdev_delay.so 00:04:19.991 CC module/bdev/nvme/vbdev_opal.o 00:04:19.991 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:20.249 LIB libspdk_bdev_null.a 00:04:20.249 SO libspdk_bdev_null.so.6.0 00:04:20.249 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:20.249 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:20.249 SYMLINK 
libspdk_bdev_null.so 00:04:20.249 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:20.249 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:20.508 CC module/bdev/raid/bdev_raid.o 00:04:20.508 LIB libspdk_bdev_malloc.a 00:04:20.508 LIB libspdk_bdev_passthru.a 00:04:20.508 CC module/bdev/split/vbdev_split.o 00:04:20.508 SO libspdk_bdev_malloc.so.6.0 00:04:20.508 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:20.508 SO libspdk_bdev_passthru.so.6.0 00:04:20.508 SYMLINK libspdk_bdev_malloc.so 00:04:20.766 SYMLINK libspdk_bdev_passthru.so 00:04:20.766 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:20.766 CC module/bdev/split/vbdev_split_rpc.o 00:04:20.766 CC module/bdev/aio/bdev_aio.o 00:04:20.766 CC module/bdev/ftl/bdev_ftl.o 00:04:20.766 LIB libspdk_bdev_lvol.a 00:04:20.766 SO libspdk_bdev_lvol.so.6.0 00:04:20.766 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.766 LIB libspdk_bdev_split.a 00:04:20.766 CC module/bdev/aio/bdev_aio_rpc.o 00:04:20.766 SYMLINK libspdk_bdev_lvol.so 00:04:20.766 SO libspdk_bdev_split.so.6.0 00:04:21.024 SYMLINK libspdk_bdev_split.so 00:04:21.024 CC module/bdev/raid/bdev_raid_rpc.o 00:04:21.024 LIB libspdk_bdev_zone_block.a 00:04:21.024 CC module/bdev/raid/bdev_raid_sb.o 00:04:21.024 CC module/bdev/raid/raid0.o 00:04:21.024 SO libspdk_bdev_zone_block.so.6.0 00:04:21.024 LIB libspdk_bdev_aio.a 00:04:21.024 CC module/bdev/iscsi/bdev_iscsi.o 00:04:21.024 LIB libspdk_bdev_ftl.a 00:04:21.024 SO libspdk_bdev_aio.so.6.0 00:04:21.024 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:21.024 SYMLINK libspdk_bdev_zone_block.so 00:04:21.024 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:21.283 SO libspdk_bdev_ftl.so.6.0 00:04:21.283 SYMLINK libspdk_bdev_aio.so 00:04:21.283 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:21.283 SYMLINK libspdk_bdev_ftl.so 00:04:21.283 CC module/bdev/raid/raid1.o 00:04:21.283 CC module/bdev/raid/concat.o 00:04:21.283 CC module/bdev/raid/raid5f.o 00:04:21.283 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:21.542 LIB 
libspdk_bdev_iscsi.a 00:04:21.542 SO libspdk_bdev_iscsi.so.6.0 00:04:21.800 SYMLINK libspdk_bdev_iscsi.so 00:04:21.800 LIB libspdk_bdev_virtio.a 00:04:21.800 SO libspdk_bdev_virtio.so.6.0 00:04:22.058 SYMLINK libspdk_bdev_virtio.so 00:04:22.058 LIB libspdk_bdev_raid.a 00:04:22.058 SO libspdk_bdev_raid.so.6.0 00:04:22.334 SYMLINK libspdk_bdev_raid.so 00:04:23.288 LIB libspdk_bdev_nvme.a 00:04:23.288 SO libspdk_bdev_nvme.so.7.0 00:04:23.288 SYMLINK libspdk_bdev_nvme.so 00:04:24.224 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:24.224 CC module/event/subsystems/vmd/vmd.o 00:04:24.224 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:24.224 CC module/event/subsystems/sock/sock.o 00:04:24.224 CC module/event/subsystems/fsdev/fsdev.o 00:04:24.224 CC module/event/subsystems/keyring/keyring.o 00:04:24.224 CC module/event/subsystems/iobuf/iobuf.o 00:04:24.224 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:24.224 CC module/event/subsystems/scheduler/scheduler.o 00:04:24.224 LIB libspdk_event_sock.a 00:04:24.224 LIB libspdk_event_vmd.a 00:04:24.224 LIB libspdk_event_fsdev.a 00:04:24.224 LIB libspdk_event_vhost_blk.a 00:04:24.224 SO libspdk_event_sock.so.5.0 00:04:24.224 LIB libspdk_event_keyring.a 00:04:24.224 LIB libspdk_event_iobuf.a 00:04:24.224 SO libspdk_event_fsdev.so.1.0 00:04:24.224 SO libspdk_event_vhost_blk.so.3.0 00:04:24.224 SO libspdk_event_keyring.so.1.0 00:04:24.224 LIB libspdk_event_scheduler.a 00:04:24.224 SO libspdk_event_vmd.so.6.0 00:04:24.484 SYMLINK libspdk_event_sock.so 00:04:24.484 SO libspdk_event_scheduler.so.4.0 00:04:24.484 SO libspdk_event_iobuf.so.3.0 00:04:24.484 SYMLINK libspdk_event_fsdev.so 00:04:24.484 SYMLINK libspdk_event_keyring.so 00:04:24.484 SYMLINK libspdk_event_vhost_blk.so 00:04:24.484 SYMLINK libspdk_event_vmd.so 00:04:24.484 SYMLINK libspdk_event_scheduler.so 00:04:24.484 SYMLINK libspdk_event_iobuf.so 00:04:24.743 CC module/event/subsystems/accel/accel.o 00:04:25.001 LIB libspdk_event_accel.a 00:04:25.001 SO 
libspdk_event_accel.so.6.0 00:04:25.001 SYMLINK libspdk_event_accel.so 00:04:25.605 CC module/event/subsystems/bdev/bdev.o 00:04:25.605 LIB libspdk_event_bdev.a 00:04:25.863 SO libspdk_event_bdev.so.6.0 00:04:25.863 SYMLINK libspdk_event_bdev.so 00:04:26.123 CC module/event/subsystems/scsi/scsi.o 00:04:26.123 CC module/event/subsystems/nbd/nbd.o 00:04:26.123 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:26.123 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:26.123 CC module/event/subsystems/ublk/ublk.o 00:04:26.382 LIB libspdk_event_nbd.a 00:04:26.382 LIB libspdk_event_scsi.a 00:04:26.382 LIB libspdk_event_ublk.a 00:04:26.382 SO libspdk_event_nbd.so.6.0 00:04:26.382 SO libspdk_event_scsi.so.6.0 00:04:26.382 SO libspdk_event_ublk.so.3.0 00:04:26.382 SYMLINK libspdk_event_scsi.so 00:04:26.382 SYMLINK libspdk_event_nbd.so 00:04:26.382 SYMLINK libspdk_event_ublk.so 00:04:26.382 LIB libspdk_event_nvmf.a 00:04:26.382 SO libspdk_event_nvmf.so.6.0 00:04:26.641 SYMLINK libspdk_event_nvmf.so 00:04:26.641 CC module/event/subsystems/iscsi/iscsi.o 00:04:26.641 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:26.900 LIB libspdk_event_vhost_scsi.a 00:04:26.900 SO libspdk_event_vhost_scsi.so.3.0 00:04:26.900 LIB libspdk_event_iscsi.a 00:04:26.900 SO libspdk_event_iscsi.so.6.0 00:04:26.900 SYMLINK libspdk_event_vhost_scsi.so 00:04:27.159 SYMLINK libspdk_event_iscsi.so 00:04:27.159 SO libspdk.so.6.0 00:04:27.159 SYMLINK libspdk.so 00:04:27.419 CXX app/trace/trace.o 00:04:27.419 CC app/trace_record/trace_record.o 00:04:27.419 CC test/rpc_client/rpc_client_test.o 00:04:27.419 TEST_HEADER include/spdk/accel.h 00:04:27.419 TEST_HEADER include/spdk/accel_module.h 00:04:27.419 TEST_HEADER include/spdk/assert.h 00:04:27.419 TEST_HEADER include/spdk/barrier.h 00:04:27.419 TEST_HEADER include/spdk/base64.h 00:04:27.419 TEST_HEADER include/spdk/bdev.h 00:04:27.419 TEST_HEADER include/spdk/bdev_module.h 00:04:27.419 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.419 TEST_HEADER 
include/spdk/bit_array.h 00:04:27.419 TEST_HEADER include/spdk/bit_pool.h 00:04:27.677 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.677 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.677 TEST_HEADER include/spdk/blobfs.h 00:04:27.677 TEST_HEADER include/spdk/blob.h 00:04:27.677 TEST_HEADER include/spdk/conf.h 00:04:27.677 TEST_HEADER include/spdk/config.h 00:04:27.677 TEST_HEADER include/spdk/cpuset.h 00:04:27.677 CC app/nvmf_tgt/nvmf_main.o 00:04:27.677 TEST_HEADER include/spdk/crc16.h 00:04:27.677 TEST_HEADER include/spdk/crc32.h 00:04:27.677 TEST_HEADER include/spdk/crc64.h 00:04:27.677 TEST_HEADER include/spdk/dif.h 00:04:27.677 TEST_HEADER include/spdk/dma.h 00:04:27.677 TEST_HEADER include/spdk/endian.h 00:04:27.677 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.677 TEST_HEADER include/spdk/env.h 00:04:27.677 TEST_HEADER include/spdk/event.h 00:04:27.677 TEST_HEADER include/spdk/fd_group.h 00:04:27.677 TEST_HEADER include/spdk/fd.h 00:04:27.677 TEST_HEADER include/spdk/file.h 00:04:27.677 TEST_HEADER include/spdk/fsdev.h 00:04:27.677 TEST_HEADER include/spdk/fsdev_module.h 00:04:27.677 TEST_HEADER include/spdk/ftl.h 00:04:27.677 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:27.677 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.677 TEST_HEADER include/spdk/hexlify.h 00:04:27.677 CC test/thread/poller_perf/poller_perf.o 00:04:27.677 TEST_HEADER include/spdk/histogram_data.h 00:04:27.677 TEST_HEADER include/spdk/idxd.h 00:04:27.677 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.677 TEST_HEADER include/spdk/init.h 00:04:27.677 TEST_HEADER include/spdk/ioat.h 00:04:27.677 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.677 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.677 TEST_HEADER include/spdk/json.h 00:04:27.677 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.677 CC examples/util/zipf/zipf.o 00:04:27.677 TEST_HEADER include/spdk/keyring.h 00:04:27.677 CC test/app/bdev_svc/bdev_svc.o 00:04:27.678 TEST_HEADER include/spdk/keyring_module.h 00:04:27.678 TEST_HEADER 
include/spdk/likely.h 00:04:27.678 TEST_HEADER include/spdk/log.h 00:04:27.678 TEST_HEADER include/spdk/lvol.h 00:04:27.678 TEST_HEADER include/spdk/md5.h 00:04:27.678 TEST_HEADER include/spdk/memory.h 00:04:27.678 TEST_HEADER include/spdk/mmio.h 00:04:27.678 TEST_HEADER include/spdk/nbd.h 00:04:27.678 TEST_HEADER include/spdk/net.h 00:04:27.678 TEST_HEADER include/spdk/notify.h 00:04:27.678 TEST_HEADER include/spdk/nvme.h 00:04:27.678 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.678 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.678 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.678 CC test/dma/test_dma/test_dma.o 00:04:27.678 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.678 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.678 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.678 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.678 TEST_HEADER include/spdk/nvmf.h 00:04:27.678 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.678 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.678 TEST_HEADER include/spdk/opal.h 00:04:27.678 TEST_HEADER include/spdk/opal_spec.h 00:04:27.678 TEST_HEADER include/spdk/pci_ids.h 00:04:27.678 TEST_HEADER include/spdk/pipe.h 00:04:27.678 TEST_HEADER include/spdk/queue.h 00:04:27.678 TEST_HEADER include/spdk/reduce.h 00:04:27.678 TEST_HEADER include/spdk/rpc.h 00:04:27.678 TEST_HEADER include/spdk/scheduler.h 00:04:27.678 TEST_HEADER include/spdk/scsi.h 00:04:27.678 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.678 LINK rpc_client_test 00:04:27.678 TEST_HEADER include/spdk/sock.h 00:04:27.678 TEST_HEADER include/spdk/stdinc.h 00:04:27.678 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.678 TEST_HEADER include/spdk/string.h 00:04:27.678 TEST_HEADER include/spdk/thread.h 00:04:27.937 TEST_HEADER include/spdk/trace.h 00:04:27.937 TEST_HEADER include/spdk/trace_parser.h 00:04:27.937 TEST_HEADER include/spdk/tree.h 00:04:27.937 TEST_HEADER include/spdk/ublk.h 00:04:27.937 LINK poller_perf 00:04:27.937 TEST_HEADER include/spdk/util.h 
00:04:27.937 TEST_HEADER include/spdk/uuid.h 00:04:27.937 TEST_HEADER include/spdk/version.h 00:04:27.937 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.937 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.937 TEST_HEADER include/spdk/vhost.h 00:04:27.937 TEST_HEADER include/spdk/vmd.h 00:04:27.937 TEST_HEADER include/spdk/xor.h 00:04:27.937 TEST_HEADER include/spdk/zipf.h 00:04:27.937 CXX test/cpp_headers/accel.o 00:04:27.937 LINK nvmf_tgt 00:04:27.937 LINK spdk_trace_record 00:04:27.937 LINK zipf 00:04:27.937 LINK bdev_svc 00:04:27.937 CXX test/cpp_headers/accel_module.o 00:04:27.937 CC test/env/vtophys/vtophys.o 00:04:28.196 CXX test/cpp_headers/assert.o 00:04:28.196 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.196 CXX test/cpp_headers/barrier.o 00:04:28.196 LINK spdk_trace 00:04:28.196 LINK vtophys 00:04:28.455 CC test/app/histogram_perf/histogram_perf.o 00:04:28.455 LINK env_dpdk_post_init 00:04:28.455 CC examples/ioat/perf/perf.o 00:04:28.455 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:28.455 CXX test/cpp_headers/base64.o 00:04:28.455 LINK test_dma 00:04:28.455 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:28.455 LINK histogram_perf 00:04:28.713 CXX test/cpp_headers/bdev.o 00:04:28.713 CC app/iscsi_tgt/iscsi_tgt.o 00:04:28.713 LINK mem_callbacks 00:04:28.713 CC app/spdk_tgt/spdk_tgt.o 00:04:28.713 CC test/env/memory/memory_ut.o 00:04:28.713 LINK ioat_perf 00:04:28.713 CXX test/cpp_headers/bdev_module.o 00:04:28.713 CC test/env/pci/pci_ut.o 00:04:28.713 LINK iscsi_tgt 00:04:28.972 LINK nvme_fuzz 00:04:28.972 LINK spdk_tgt 00:04:28.972 CC test/event/event_perf/event_perf.o 00:04:28.972 CC test/nvme/aer/aer.o 00:04:28.972 CXX test/cpp_headers/bdev_zone.o 00:04:29.230 LINK event_perf 00:04:29.230 CC examples/ioat/verify/verify.o 00:04:29.230 CC app/spdk_lspci/spdk_lspci.o 00:04:29.230 CXX test/cpp_headers/bit_array.o 00:04:29.230 LINK pci_ut 00:04:29.490 LINK spdk_lspci 00:04:29.490 LINK aer 00:04:29.490 CC test/event/reactor/reactor.o 
00:04:29.490 CC test/accel/dif/dif.o 00:04:29.490 LINK verify 00:04:29.490 CXX test/cpp_headers/bit_pool.o 00:04:29.490 CC test/blobfs/mkfs/mkfs.o 00:04:29.749 CC app/spdk_nvme_perf/perf.o 00:04:29.749 LINK reactor 00:04:29.749 CXX test/cpp_headers/blob_bdev.o 00:04:29.749 CC test/nvme/reset/reset.o 00:04:29.749 LINK mkfs 00:04:30.008 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.008 CC examples/vmd/led/led.o 00:04:30.008 CXX test/cpp_headers/blobfs_bdev.o 00:04:30.008 CC test/event/reactor_perf/reactor_perf.o 00:04:30.008 LINK memory_ut 00:04:30.008 LINK lsvmd 00:04:30.008 LINK led 00:04:30.008 CXX test/cpp_headers/blobfs.o 00:04:30.266 LINK reset 00:04:30.266 LINK reactor_perf 00:04:30.266 CXX test/cpp_headers/blob.o 00:04:30.266 CC test/app/jsoncat/jsoncat.o 00:04:30.266 CXX test/cpp_headers/conf.o 00:04:30.524 CC test/nvme/sgl/sgl.o 00:04:30.524 LINK jsoncat 00:04:30.524 CC app/spdk_nvme_identify/identify.o 00:04:30.524 CC test/event/app_repeat/app_repeat.o 00:04:30.524 LINK dif 00:04:30.524 CC examples/idxd/perf/perf.o 00:04:30.524 CXX test/cpp_headers/config.o 00:04:30.524 CC app/spdk_nvme_discover/discovery_aer.o 00:04:30.524 CXX test/cpp_headers/cpuset.o 00:04:30.782 LINK app_repeat 00:04:30.782 LINK spdk_nvme_perf 00:04:30.782 CC app/spdk_top/spdk_top.o 00:04:30.782 CXX test/cpp_headers/crc16.o 00:04:30.782 LINK sgl 00:04:30.782 LINK spdk_nvme_discover 00:04:30.782 LINK iscsi_fuzz 00:04:31.040 CC test/app/stub/stub.o 00:04:31.040 CXX test/cpp_headers/crc32.o 00:04:31.040 LINK idxd_perf 00:04:31.040 CC test/event/scheduler/scheduler.o 00:04:31.040 CXX test/cpp_headers/crc64.o 00:04:31.040 CC test/nvme/e2edp/nvme_dp.o 00:04:31.040 LINK stub 00:04:31.040 CC app/vhost/vhost.o 00:04:31.298 CXX test/cpp_headers/dif.o 00:04:31.298 CC app/spdk_dd/spdk_dd.o 00:04:31.298 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:31.298 LINK vhost 00:04:31.298 LINK scheduler 00:04:31.298 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:31.298 CXX test/cpp_headers/dma.o 00:04:31.557 
CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:31.557 LINK nvme_dp 00:04:31.557 CXX test/cpp_headers/endian.o 00:04:31.557 CXX test/cpp_headers/env_dpdk.o 00:04:31.557 LINK interrupt_tgt 00:04:31.557 CC app/fio/nvme/fio_plugin.o 00:04:31.815 CC app/fio/bdev/fio_plugin.o 00:04:31.815 LINK spdk_dd 00:04:31.815 LINK spdk_nvme_identify 00:04:31.815 CXX test/cpp_headers/env.o 00:04:31.815 CC test/nvme/overhead/overhead.o 00:04:31.815 CXX test/cpp_headers/event.o 00:04:32.073 CXX test/cpp_headers/fd_group.o 00:04:32.073 LINK vhost_fuzz 00:04:32.073 CXX test/cpp_headers/fd.o 00:04:32.073 CXX test/cpp_headers/file.o 00:04:32.073 CC examples/thread/thread/thread_ex.o 00:04:32.073 LINK spdk_top 00:04:32.073 CXX test/cpp_headers/fsdev.o 00:04:32.073 CXX test/cpp_headers/fsdev_module.o 00:04:32.073 CXX test/cpp_headers/ftl.o 00:04:32.073 LINK overhead 00:04:32.332 CC test/nvme/err_injection/err_injection.o 00:04:32.332 LINK thread 00:04:32.332 CXX test/cpp_headers/fuse_dispatcher.o 00:04:32.332 LINK spdk_bdev 00:04:32.332 CC test/nvme/startup/startup.o 00:04:32.332 CXX test/cpp_headers/gpt_spec.o 00:04:32.332 CXX test/cpp_headers/hexlify.o 00:04:32.332 LINK spdk_nvme 00:04:32.332 CC test/lvol/esnap/esnap.o 00:04:32.332 CC examples/sock/hello_world/hello_sock.o 00:04:32.645 LINK err_injection 00:04:32.645 CXX test/cpp_headers/histogram_data.o 00:04:32.645 LINK startup 00:04:32.645 CXX test/cpp_headers/idxd.o 00:04:32.645 CC test/nvme/reserve/reserve.o 00:04:32.645 CC test/nvme/simple_copy/simple_copy.o 00:04:32.645 CC test/nvme/connect_stress/connect_stress.o 00:04:32.645 CC test/nvme/boot_partition/boot_partition.o 00:04:32.917 LINK hello_sock 00:04:32.917 CXX test/cpp_headers/idxd_spec.o 00:04:32.917 LINK connect_stress 00:04:32.917 CC examples/accel/perf/accel_perf.o 00:04:32.917 LINK boot_partition 00:04:32.917 CC test/nvme/compliance/nvme_compliance.o 00:04:32.917 LINK reserve 00:04:32.917 LINK simple_copy 00:04:32.917 CC examples/blob/hello_world/hello_blob.o 00:04:32.917 
CXX test/cpp_headers/init.o 00:04:33.176 CXX test/cpp_headers/ioat.o 00:04:33.176 CC examples/blob/cli/blobcli.o 00:04:33.176 CXX test/cpp_headers/ioat_spec.o 00:04:33.176 CXX test/cpp_headers/iscsi_spec.o 00:04:33.176 CXX test/cpp_headers/json.o 00:04:33.176 LINK hello_blob 00:04:33.176 CC test/nvme/fused_ordering/fused_ordering.o 00:04:33.435 CXX test/cpp_headers/jsonrpc.o 00:04:33.435 CXX test/cpp_headers/keyring.o 00:04:33.435 LINK nvme_compliance 00:04:33.435 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:33.435 CXX test/cpp_headers/keyring_module.o 00:04:33.435 LINK fused_ordering 00:04:33.693 LINK accel_perf 00:04:33.693 CC test/bdev/bdevio/bdevio.o 00:04:33.693 LINK doorbell_aers 00:04:33.693 CC examples/nvme/hello_world/hello_world.o 00:04:33.693 CC test/nvme/fdp/fdp.o 00:04:33.693 CXX test/cpp_headers/likely.o 00:04:33.693 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:33.693 CXX test/cpp_headers/log.o 00:04:33.693 LINK blobcli 00:04:33.951 CC examples/nvme/reconnect/reconnect.o 00:04:33.951 CXX test/cpp_headers/lvol.o 00:04:33.951 LINK hello_world 00:04:33.951 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:33.951 CC test/nvme/cuse/cuse.o 00:04:33.951 LINK hello_fsdev 00:04:33.951 CXX test/cpp_headers/md5.o 00:04:33.952 LINK fdp 00:04:34.210 LINK bdevio 00:04:34.210 CC examples/bdev/hello_world/hello_bdev.o 00:04:34.210 CC examples/nvme/arbitration/arbitration.o 00:04:34.210 LINK reconnect 00:04:34.210 CC examples/nvme/hotplug/hotplug.o 00:04:34.210 CXX test/cpp_headers/memory.o 00:04:34.470 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:34.470 CC examples/nvme/abort/abort.o 00:04:34.470 LINK hello_bdev 00:04:34.470 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:34.470 CXX test/cpp_headers/mmio.o 00:04:34.470 LINK nvme_manage 00:04:34.470 LINK cmb_copy 00:04:34.470 LINK hotplug 00:04:34.729 LINK arbitration 00:04:34.729 LINK pmr_persistence 00:04:34.729 CXX test/cpp_headers/nbd.o 00:04:34.729 CXX test/cpp_headers/net.o 00:04:34.729 CXX 
test/cpp_headers/notify.o 00:04:34.729 LINK abort 00:04:34.729 CXX test/cpp_headers/nvme.o 00:04:34.729 CXX test/cpp_headers/nvme_intel.o 00:04:34.987 CXX test/cpp_headers/nvme_ocssd.o 00:04:34.987 CC examples/bdev/bdevperf/bdevperf.o 00:04:34.987 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:34.987 CXX test/cpp_headers/nvme_spec.o 00:04:34.987 CXX test/cpp_headers/nvme_zns.o 00:04:34.987 CXX test/cpp_headers/nvmf_cmd.o 00:04:34.987 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:34.987 CXX test/cpp_headers/nvmf.o 00:04:34.987 CXX test/cpp_headers/nvmf_spec.o 00:04:34.987 CXX test/cpp_headers/nvmf_transport.o 00:04:34.987 CXX test/cpp_headers/opal.o 00:04:35.246 CXX test/cpp_headers/opal_spec.o 00:04:35.246 CXX test/cpp_headers/pci_ids.o 00:04:35.246 CXX test/cpp_headers/pipe.o 00:04:35.246 CXX test/cpp_headers/queue.o 00:04:35.246 CXX test/cpp_headers/reduce.o 00:04:35.246 CXX test/cpp_headers/rpc.o 00:04:35.246 CXX test/cpp_headers/scheduler.o 00:04:35.246 CXX test/cpp_headers/scsi.o 00:04:35.246 CXX test/cpp_headers/scsi_spec.o 00:04:35.246 CXX test/cpp_headers/sock.o 00:04:35.246 CXX test/cpp_headers/stdinc.o 00:04:35.505 CXX test/cpp_headers/string.o 00:04:35.505 CXX test/cpp_headers/thread.o 00:04:35.505 CXX test/cpp_headers/trace.o 00:04:35.505 LINK cuse 00:04:35.505 CXX test/cpp_headers/trace_parser.o 00:04:35.505 CXX test/cpp_headers/tree.o 00:04:35.505 CXX test/cpp_headers/ublk.o 00:04:35.505 CXX test/cpp_headers/util.o 00:04:35.505 CXX test/cpp_headers/uuid.o 00:04:35.505 CXX test/cpp_headers/version.o 00:04:35.505 CXX test/cpp_headers/vfio_user_pci.o 00:04:35.764 CXX test/cpp_headers/vfio_user_spec.o 00:04:35.764 CXX test/cpp_headers/vhost.o 00:04:35.764 CXX test/cpp_headers/vmd.o 00:04:35.764 CXX test/cpp_headers/xor.o 00:04:35.764 CXX test/cpp_headers/zipf.o 00:04:36.021 LINK bdevperf 00:04:36.587 CC examples/nvmf/nvmf/nvmf.o 00:04:37.193 LINK nvmf 00:04:39.779 LINK esnap 00:04:39.779 00:04:39.779 real 1m41.756s 00:04:39.779 user 9m23.178s 00:04:39.779 sys 
2m0.467s 00:04:39.779 09:38:24 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:39.779 09:38:24 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.779 ************************************ 00:04:39.779 END TEST make 00:04:39.779 ************************************ 00:04:39.779 09:38:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.779 09:38:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.779 09:38:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.779 09:38:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.779 09:38:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.779 09:38:24 -- pm/common@44 -- $ pid=5460 00:04:39.779 09:38:24 -- pm/common@50 -- $ kill -TERM 5460 00:04:39.779 09:38:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.779 09:38:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.779 09:38:24 -- pm/common@44 -- $ pid=5462 00:04:39.779 09:38:24 -- pm/common@50 -- $ kill -TERM 5462 00:04:40.037 09:38:24 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.037 09:38:24 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.037 09:38:24 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:40.037 09:38:24 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:40.037 09:38:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.037 09:38:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.037 09:38:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.037 09:38:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.037 09:38:24 -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.037 09:38:24 -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.037 09:38:24 -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.037 09:38:24 -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.037 09:38:24 -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:40.037 09:38:24 -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.037 09:38:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.037 09:38:24 -- scripts/common.sh@344 -- # case "$op" in 00:04:40.037 09:38:24 -- scripts/common.sh@345 -- # : 1 00:04:40.037 09:38:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.037 09:38:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.037 09:38:24 -- scripts/common.sh@365 -- # decimal 1 00:04:40.037 09:38:24 -- scripts/common.sh@353 -- # local d=1 00:04:40.037 09:38:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.037 09:38:24 -- scripts/common.sh@355 -- # echo 1 00:04:40.037 09:38:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.037 09:38:24 -- scripts/common.sh@366 -- # decimal 2 00:04:40.037 09:38:24 -- scripts/common.sh@353 -- # local d=2 00:04:40.037 09:38:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.037 09:38:24 -- scripts/common.sh@355 -- # echo 2 00:04:40.037 09:38:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.037 09:38:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.037 09:38:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.037 09:38:24 -- scripts/common.sh@368 -- # return 0 00:04:40.037 09:38:24 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.037 09:38:24 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:40.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.037 --rc genhtml_branch_coverage=1 00:04:40.037 --rc genhtml_function_coverage=1 00:04:40.037 --rc genhtml_legend=1 00:04:40.037 --rc geninfo_all_blocks=1 00:04:40.037 --rc geninfo_unexecuted_blocks=1 00:04:40.037 00:04:40.037 ' 00:04:40.037 09:38:24 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:40.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.037 --rc 
genhtml_branch_coverage=1 00:04:40.037 --rc genhtml_function_coverage=1 00:04:40.037 --rc genhtml_legend=1 00:04:40.037 --rc geninfo_all_blocks=1 00:04:40.037 --rc geninfo_unexecuted_blocks=1 00:04:40.037 00:04:40.037 ' 00:04:40.037 09:38:24 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:40.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.037 --rc genhtml_branch_coverage=1 00:04:40.037 --rc genhtml_function_coverage=1 00:04:40.037 --rc genhtml_legend=1 00:04:40.037 --rc geninfo_all_blocks=1 00:04:40.037 --rc geninfo_unexecuted_blocks=1 00:04:40.037 00:04:40.037 ' 00:04:40.037 09:38:24 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:40.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.037 --rc genhtml_branch_coverage=1 00:04:40.037 --rc genhtml_function_coverage=1 00:04:40.037 --rc genhtml_legend=1 00:04:40.037 --rc geninfo_all_blocks=1 00:04:40.037 --rc geninfo_unexecuted_blocks=1 00:04:40.037 00:04:40.037 ' 00:04:40.037 09:38:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.037 09:38:24 -- nvmf/common.sh@7 -- # uname -s 00:04:40.037 09:38:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.037 09:38:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.037 09:38:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.037 09:38:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.037 09:38:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.037 09:38:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.037 09:38:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.037 09:38:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.037 09:38:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.037 09:38:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.037 09:38:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ae20291-71ab-43d0-8891-47a0451aa469 
00:04:40.037 09:38:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=1ae20291-71ab-43d0-8891-47a0451aa469 00:04:40.037 09:38:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.037 09:38:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.037 09:38:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.037 09:38:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.037 09:38:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.037 09:38:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.037 09:38:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.037 09:38:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.037 09:38:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.037 09:38:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.037 09:38:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.037 09:38:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.037 09:38:24 -- paths/export.sh@5 -- # export PATH 00:04:40.037 09:38:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.037 09:38:24 -- nvmf/common.sh@51 -- # : 0 00:04:40.037 09:38:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.037 09:38:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:40.037 09:38:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.037 09:38:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.037 09:38:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.037 09:38:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.037 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.037 09:38:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.037 09:38:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.037 09:38:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.037 09:38:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:40.037 09:38:24 -- spdk/autotest.sh@32 -- # uname -s 00:04:40.037 09:38:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:40.037 09:38:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:40.038 09:38:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.038 09:38:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:40.038 09:38:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.038 09:38:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:40.296 09:38:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:40.296 09:38:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:40.296 09:38:24 -- spdk/autotest.sh@48 -- # udevadm_pid=54674 
00:04:40.296 09:38:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:40.296 09:38:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:40.296 09:38:24 -- pm/common@17 -- # local monitor 00:04:40.296 09:38:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.296 09:38:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.296 09:38:24 -- pm/common@25 -- # sleep 1 00:04:40.296 09:38:24 -- pm/common@21 -- # date +%s 00:04:40.296 09:38:24 -- pm/common@21 -- # date +%s 00:04:40.296 09:38:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728639504 00:04:40.296 09:38:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728639504 00:04:40.296 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728639504_collect-cpu-load.pm.log 00:04:40.296 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728639504_collect-vmstat.pm.log 00:04:41.236 09:38:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.236 09:38:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:41.236 09:38:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.236 09:38:25 -- common/autotest_common.sh@10 -- # set +x 00:04:41.236 09:38:25 -- spdk/autotest.sh@59 -- # create_test_list 00:04:41.236 09:38:25 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:41.236 09:38:25 -- common/autotest_common.sh@10 -- # set +x 00:04:41.236 09:38:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:41.236 09:38:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:41.236 09:38:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:41.236 09:38:25 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:41.236 09:38:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:41.236 09:38:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:41.236 09:38:25 -- common/autotest_common.sh@1455 -- # uname 00:04:41.236 09:38:25 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:41.236 09:38:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:41.236 09:38:25 -- common/autotest_common.sh@1475 -- # uname 00:04:41.236 09:38:25 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:41.236 09:38:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:41.236 09:38:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:41.500 lcov: LCOV version 1.15 00:04:41.500 09:38:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:59.593 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:59.593 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:14.499 09:38:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:14.499 09:38:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.499 09:38:57 -- common/autotest_common.sh@10 -- # set +x 00:05:14.499 09:38:57 -- spdk/autotest.sh@78 -- # rm -f 00:05:14.499 09:38:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.499 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.499 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:14.499 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:14.499 09:38:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:14.499 09:38:58 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:14.499 09:38:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:14.499 09:38:58 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:14.500 09:38:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.500 09:38:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:14.500 09:38:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:14.500 09:38:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.500 09:38:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.500 09:38:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.500 09:38:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:14.500 09:38:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:14.500 09:38:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:14.500 09:38:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.500 09:38:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.500 09:38:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:14.500 09:38:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:14.500 09:38:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:14.500 09:38:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.500 09:38:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.500 09:38:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 
00:05:14.500 09:38:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:14.500 09:38:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:14.500 09:38:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.500 09:38:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:14.500 09:38:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.500 09:38:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.500 09:38:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:14.500 09:38:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:14.500 09:38:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:14.500 No valid GPT data, bailing 00:05:14.500 09:38:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:14.500 09:38:59 -- scripts/common.sh@394 -- # pt= 00:05:14.500 09:38:59 -- scripts/common.sh@395 -- # return 1 00:05:14.500 09:38:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:14.500 1+0 records in 00:05:14.500 1+0 records out 00:05:14.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466539 s, 225 MB/s 00:05:14.500 09:38:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.500 09:38:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.500 09:38:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:14.500 09:38:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:14.500 09:38:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:14.500 No valid GPT data, bailing 00:05:14.500 09:38:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:14.500 09:38:59 -- scripts/common.sh@394 -- # pt= 00:05:14.500 09:38:59 -- scripts/common.sh@395 -- # return 1 00:05:14.500 09:38:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:14.500 1+0 records in 
00:05:14.500 1+0 records out 00:05:14.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658166 s, 159 MB/s 00:05:14.500 09:38:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.500 09:38:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.500 09:38:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:14.500 09:38:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:14.500 09:38:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:14.759 No valid GPT data, bailing 00:05:14.759 09:38:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:14.759 09:38:59 -- scripts/common.sh@394 -- # pt= 00:05:14.759 09:38:59 -- scripts/common.sh@395 -- # return 1 00:05:14.759 09:38:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:14.759 1+0 records in 00:05:14.759 1+0 records out 00:05:14.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683355 s, 153 MB/s 00:05:14.759 09:38:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.759 09:38:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.759 09:38:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:14.759 09:38:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:14.759 09:38:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:14.759 No valid GPT data, bailing 00:05:14.759 09:38:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:14.759 09:38:59 -- scripts/common.sh@394 -- # pt= 00:05:14.759 09:38:59 -- scripts/common.sh@395 -- # return 1 00:05:14.759 09:38:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:14.759 1+0 records in 00:05:14.759 1+0 records out 00:05:14.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432439 s, 242 MB/s 00:05:14.759 09:38:59 -- spdk/autotest.sh@105 -- # sync 00:05:15.018 09:38:59 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:05:15.018 09:38:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:15.018 09:38:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:18.328 09:39:02 -- spdk/autotest.sh@111 -- # uname -s 00:05:18.328 09:39:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:18.328 09:39:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:18.328 09:39:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:18.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.587 Hugepages 00:05:18.587 node hugesize free / total 00:05:18.846 node0 1048576kB 0 / 0 00:05:18.846 node0 2048kB 0 / 0 00:05:18.846 00:05:18.846 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.846 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:18.846 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:19.104 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:19.104 09:39:03 -- spdk/autotest.sh@117 -- # uname -s 00:05:19.104 09:39:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:19.104 09:39:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:19.104 09:39:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.043 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.043 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.043 09:39:04 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:20.979 09:39:05 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:20.979 09:39:05 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:20.980 09:39:05 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:20.980 09:39:05 -- common/autotest_common.sh@1518 -- # 
get_nvme_bdfs 00:05:20.980 09:39:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:20.980 09:39:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:20.980 09:39:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.980 09:39:05 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.980 09:39:05 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:21.238 09:39:05 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:21.238 09:39:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:21.238 09:39:05 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.496 Waiting for block devices as requested 00:05:21.755 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.755 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.755 09:39:06 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:21.755 09:39:06 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:21.755 09:39:06 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:21.755 09:39:06 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:21.755 09:39:06 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:21.755 09:39:06 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:21.755 09:39:06 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:21.755 09:39:06 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:22.075 09:39:06 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 
00:05:22.075 09:39:06 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:22.075 09:39:06 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:22.075 09:39:06 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:22.075 09:39:06 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:22.075 09:39:06 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:22.075 09:39:06 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:22.075 09:39:06 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:22.075 09:39:06 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:22.075 09:39:06 -- common/autotest_common.sh@1541 -- # continue 00:05:22.075 09:39:06 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:22.075 09:39:06 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:22.075 09:39:06 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:22.075 09:39:06 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:22.075 09:39:06 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.075 09:39:06 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:22.075 09:39:06 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.075 09:39:06 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:22.075 09:39:06 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:22.075 09:39:06 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:22.075 
09:39:06 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:22.075 09:39:06 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:22.075 09:39:06 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:22.075 09:39:06 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:22.075 09:39:06 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:22.075 09:39:06 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:22.075 09:39:06 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:22.075 09:39:06 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:22.075 09:39:06 -- common/autotest_common.sh@1541 -- # continue 00:05:22.075 09:39:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:22.075 09:39:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.075 09:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:22.075 09:39:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:22.075 09:39:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.075 09:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:22.075 09:39:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.014 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.014 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.014 09:39:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:23.014 09:39:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.014 09:39:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.014 09:39:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:23.014 09:39:07 -- 
common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:23.014 09:39:07 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.014 09:39:07 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:23.014 09:39:07 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:23.014 09:39:07 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:23.014 09:39:07 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:23.014 09:39:07 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:23.014 09:39:07 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:23.014 09:39:07 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:23.014 09:39:07 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.014 09:39:07 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:23.014 09:39:07 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.274 09:39:07 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:23.274 09:39:07 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:23.274 09:39:07 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:23.274 09:39:07 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:23.274 09:39:07 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:23.274 09:39:07 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.274 09:39:07 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:23.274 09:39:07 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:23.274 09:39:07 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:23.274 09:39:07 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.274 09:39:07 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:23.274 09:39:07 -- 
common/autotest_common.sh@1570 -- # return 0 00:05:23.274 09:39:07 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:23.274 09:39:07 -- common/autotest_common.sh@1578 -- # return 0 00:05:23.274 09:39:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:23.274 09:39:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:23.274 09:39:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.274 09:39:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.274 09:39:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:23.274 09:39:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.274 09:39:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.274 09:39:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:23.274 09:39:07 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.274 09:39:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.274 09:39:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.274 09:39:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.274 ************************************ 00:05:23.274 START TEST env 00:05:23.274 ************************************ 00:05:23.274 09:39:07 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.274 * Looking for test storage... 
00:05:23.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:23.274 09:39:07 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.274 09:39:07 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.274 09:39:07 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.534 09:39:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.534 09:39:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.534 09:39:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.534 09:39:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.534 09:39:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.534 09:39:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.534 09:39:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.534 09:39:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.534 09:39:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.534 09:39:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.534 09:39:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.534 09:39:07 env -- scripts/common.sh@344 -- # case "$op" in 00:05:23.534 09:39:07 env -- scripts/common.sh@345 -- # : 1 00:05:23.534 09:39:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.534 09:39:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.534 09:39:07 env -- scripts/common.sh@365 -- # decimal 1 00:05:23.534 09:39:07 env -- scripts/common.sh@353 -- # local d=1 00:05:23.534 09:39:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.534 09:39:07 env -- scripts/common.sh@355 -- # echo 1 00:05:23.534 09:39:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.534 09:39:07 env -- scripts/common.sh@366 -- # decimal 2 00:05:23.534 09:39:07 env -- scripts/common.sh@353 -- # local d=2 00:05:23.534 09:39:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.534 09:39:07 env -- scripts/common.sh@355 -- # echo 2 00:05:23.534 09:39:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.534 09:39:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.534 09:39:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.534 09:39:07 env -- scripts/common.sh@368 -- # return 0 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.534 --rc genhtml_branch_coverage=1 00:05:23.534 --rc genhtml_function_coverage=1 00:05:23.534 --rc genhtml_legend=1 00:05:23.534 --rc geninfo_all_blocks=1 00:05:23.534 --rc geninfo_unexecuted_blocks=1 00:05:23.534 00:05:23.534 ' 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.534 --rc genhtml_branch_coverage=1 00:05:23.534 --rc genhtml_function_coverage=1 00:05:23.534 --rc genhtml_legend=1 00:05:23.534 --rc geninfo_all_blocks=1 00:05:23.534 --rc geninfo_unexecuted_blocks=1 00:05:23.534 00:05:23.534 ' 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:23.534 --rc genhtml_branch_coverage=1 00:05:23.534 --rc genhtml_function_coverage=1 00:05:23.534 --rc genhtml_legend=1 00:05:23.534 --rc geninfo_all_blocks=1 00:05:23.534 --rc geninfo_unexecuted_blocks=1 00:05:23.534 00:05:23.534 ' 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.534 --rc genhtml_branch_coverage=1 00:05:23.534 --rc genhtml_function_coverage=1 00:05:23.534 --rc genhtml_legend=1 00:05:23.534 --rc geninfo_all_blocks=1 00:05:23.534 --rc geninfo_unexecuted_blocks=1 00:05:23.534 00:05:23.534 ' 00:05:23.534 09:39:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.534 09:39:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.534 09:39:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.534 ************************************ 00:05:23.534 START TEST env_memory 00:05:23.534 ************************************ 00:05:23.534 09:39:07 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:23.534 00:05:23.534 00:05:23.534 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.534 http://cunit.sourceforge.net/ 00:05:23.534 00:05:23.534 00:05:23.534 Suite: memory 00:05:23.534 Test: alloc and free memory map ...[2024-10-11 09:39:08.023216] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.534 passed 00:05:23.534 Test: mem map translation ...[2024-10-11 09:39:08.079497] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.534 [2024-10-11 09:39:08.079563] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.534 [2024-10-11 09:39:08.079637] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.534 [2024-10-11 09:39:08.079665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.534 passed 00:05:23.534 Test: mem map registration ...[2024-10-11 09:39:08.153448] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:23.534 [2024-10-11 09:39:08.153527] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:23.794 passed 00:05:23.794 Test: mem map adjacent registrations ...passed 00:05:23.794 00:05:23.794 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.794 suites 1 1 n/a 0 0 00:05:23.794 tests 4 4 4 0 0 00:05:23.794 asserts 152 152 152 0 n/a 00:05:23.794 00:05:23.794 Elapsed time = 0.289 seconds 00:05:23.794 00:05:23.794 real 0m0.330s 00:05:23.794 user 0m0.296s 00:05:23.794 sys 0m0.026s 00:05:23.794 09:39:08 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.794 09:39:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:23.794 ************************************ 00:05:23.794 END TEST env_memory 00:05:23.794 ************************************ 00:05:23.794 09:39:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:23.794 09:39:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.794 09:39:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.794 09:39:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.794 
************************************ 00:05:23.794 START TEST env_vtophys 00:05:23.794 ************************************ 00:05:23.794 09:39:08 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:23.794 EAL: lib.eal log level changed from notice to debug 00:05:23.794 EAL: Detected lcore 0 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 1 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 2 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 3 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 4 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 5 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 6 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 7 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 8 as core 0 on socket 0 00:05:23.794 EAL: Detected lcore 9 as core 0 on socket 0 00:05:23.794 EAL: Maximum logical cores by configuration: 128 00:05:23.794 EAL: Detected CPU lcores: 10 00:05:23.794 EAL: Detected NUMA nodes: 1 00:05:23.794 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:05:23.794 EAL: Detected shared linkage of DPDK 00:05:24.054 EAL: No shared files mode enabled, IPC will be disabled 00:05:24.054 EAL: Selected IOVA mode 'PA' 00:05:24.054 EAL: Probing VFIO support... 00:05:24.054 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.054 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:24.054 EAL: Ask a virtual area of 0x2e000 bytes 00:05:24.054 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:24.054 EAL: Setting up physically contiguous memory... 
00:05:24.054 EAL: Setting maximum number of open files to 524288 00:05:24.054 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:24.054 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:24.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.054 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:24.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.054 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:24.054 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:24.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.054 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:24.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.054 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:24.054 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:24.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.054 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:24.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.054 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:24.054 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:24.054 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.054 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:24.054 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.054 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.054 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:24.054 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:24.054 EAL: Hugepages will be freed exactly as allocated. 
00:05:24.054 EAL: No shared files mode enabled, IPC is disabled 00:05:24.054 EAL: No shared files mode enabled, IPC is disabled 00:05:24.054 EAL: TSC frequency is ~2290000 KHz 00:05:24.054 EAL: Main lcore 0 is ready (tid=7f3fa7500a40;cpuset=[0]) 00:05:24.054 EAL: Trying to obtain current memory policy. 00:05:24.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.054 EAL: Restoring previous memory policy: 0 00:05:24.054 EAL: request: mp_malloc_sync 00:05:24.054 EAL: No shared files mode enabled, IPC is disabled 00:05:24.054 EAL: Heap on socket 0 was expanded by 2MB 00:05:24.054 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.054 EAL: Mem event callback 'spdk:(nil)' registered 00:05:24.054 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:24.054 00:05:24.054 00:05:24.054 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.054 http://cunit.sourceforge.net/ 00:05:24.054 00:05:24.054 00:05:24.054 Suite: components_suite 00:05:24.621 Test: vtophys_malloc_test ...passed 00:05:24.621 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:24.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.621 EAL: Restoring previous memory policy: 4 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was expanded by 4MB 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was shrunk by 4MB 00:05:24.621 EAL: Trying to obtain current memory policy. 
00:05:24.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.621 EAL: Restoring previous memory policy: 4 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was expanded by 6MB 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was shrunk by 6MB 00:05:24.621 EAL: Trying to obtain current memory policy. 00:05:24.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.621 EAL: Restoring previous memory policy: 4 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was expanded by 10MB 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was shrunk by 10MB 00:05:24.621 EAL: Trying to obtain current memory policy. 00:05:24.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.621 EAL: Restoring previous memory policy: 4 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was expanded by 18MB 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was shrunk by 18MB 00:05:24.621 EAL: Trying to obtain current memory policy. 
00:05:24.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.621 EAL: Restoring previous memory policy: 4 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was expanded by 34MB 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was shrunk by 34MB 00:05:24.621 EAL: Trying to obtain current memory policy. 00:05:24.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.621 EAL: Restoring previous memory policy: 4 00:05:24.621 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.621 EAL: request: mp_malloc_sync 00:05:24.621 EAL: No shared files mode enabled, IPC is disabled 00:05:24.621 EAL: Heap on socket 0 was expanded by 66MB 00:05:24.880 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.880 EAL: request: mp_malloc_sync 00:05:24.880 EAL: No shared files mode enabled, IPC is disabled 00:05:24.880 EAL: Heap on socket 0 was shrunk by 66MB 00:05:24.880 EAL: Trying to obtain current memory policy. 00:05:24.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.138 EAL: Restoring previous memory policy: 4 00:05:25.138 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.138 EAL: request: mp_malloc_sync 00:05:25.138 EAL: No shared files mode enabled, IPC is disabled 00:05:25.138 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.397 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.397 EAL: request: mp_malloc_sync 00:05:25.397 EAL: No shared files mode enabled, IPC is disabled 00:05:25.397 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.397 EAL: Trying to obtain current memory policy. 
00:05:25.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.655 EAL: Restoring previous memory policy: 4 00:05:25.655 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.655 EAL: request: mp_malloc_sync 00:05:25.655 EAL: No shared files mode enabled, IPC is disabled 00:05:25.655 EAL: Heap on socket 0 was expanded by 258MB 00:05:26.221 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.221 EAL: request: mp_malloc_sync 00:05:26.221 EAL: No shared files mode enabled, IPC is disabled 00:05:26.221 EAL: Heap on socket 0 was shrunk by 258MB 00:05:26.480 EAL: Trying to obtain current memory policy. 00:05:26.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.739 EAL: Restoring previous memory policy: 4 00:05:26.739 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.739 EAL: request: mp_malloc_sync 00:05:26.739 EAL: No shared files mode enabled, IPC is disabled 00:05:26.739 EAL: Heap on socket 0 was expanded by 514MB 00:05:27.679 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.679 EAL: request: mp_malloc_sync 00:05:27.679 EAL: No shared files mode enabled, IPC is disabled 00:05:27.679 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.618 EAL: Trying to obtain current memory policy. 
00:05:28.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.878 EAL: Restoring previous memory policy: 4 00:05:28.878 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.878 EAL: request: mp_malloc_sync 00:05:28.878 EAL: No shared files mode enabled, IPC is disabled 00:05:28.878 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.414 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.414 EAL: request: mp_malloc_sync 00:05:31.414 EAL: No shared files mode enabled, IPC is disabled 00:05:31.414 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.799 passed 00:05:32.799 00:05:32.799 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.799 suites 1 1 n/a 0 0 00:05:32.799 tests 2 2 2 0 0 00:05:32.799 asserts 5453 5453 5453 0 n/a 00:05:32.799 00:05:32.799 Elapsed time = 8.763 seconds 00:05:32.799 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.799 EAL: request: mp_malloc_sync 00:05:32.799 EAL: No shared files mode enabled, IPC is disabled 00:05:32.799 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.799 EAL: No shared files mode enabled, IPC is disabled 00:05:32.799 EAL: No shared files mode enabled, IPC is disabled 00:05:32.799 EAL: No shared files mode enabled, IPC is disabled 00:05:33.070 00:05:33.070 real 0m9.098s 00:05:33.070 user 0m8.072s 00:05:33.070 sys 0m0.863s 00:05:33.070 09:39:17 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.070 09:39:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:33.070 ************************************ 00:05:33.070 END TEST env_vtophys 00:05:33.071 ************************************ 00:05:33.071 09:39:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:33.071 09:39:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.071 09:39:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.071 09:39:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.071 
************************************ 00:05:33.071 START TEST env_pci 00:05:33.071 ************************************ 00:05:33.071 09:39:17 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:33.071 00:05:33.071 00:05:33.071 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.071 http://cunit.sourceforge.net/ 00:05:33.071 00:05:33.071 00:05:33.071 Suite: pci 00:05:33.071 Test: pci_hook ...[2024-10-11 09:39:17.544503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57005 has claimed it 00:05:33.071 passed 00:05:33.071 00:05:33.071 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.071 suites 1 1 n/a 0 0 00:05:33.071 tests 1 1 1 0 0 00:05:33.071 asserts 25 25 25 0 n/a 00:05:33.071 00:05:33.071 Elapsed time = 0.006 seconds 00:05:33.071 EAL: Cannot find device (10000:00:01.0) 00:05:33.071 EAL: Failed to attach device on primary process 00:05:33.071 00:05:33.071 real 0m0.095s 00:05:33.071 user 0m0.039s 00:05:33.071 sys 0m0.055s 00:05:33.071 09:39:17 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.071 09:39:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:33.071 ************************************ 00:05:33.071 END TEST env_pci 00:05:33.071 ************************************ 00:05:33.071 09:39:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:33.071 09:39:17 env -- env/env.sh@15 -- # uname 00:05:33.071 09:39:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:33.071 09:39:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:33.071 09:39:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.071 09:39:17 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:33.071 09:39:17 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.071 09:39:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.071 ************************************ 00:05:33.071 START TEST env_dpdk_post_init 00:05:33.071 ************************************ 00:05:33.071 09:39:17 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.328 EAL: Detected CPU lcores: 10 00:05:33.328 EAL: Detected NUMA nodes: 1 00:05:33.328 EAL: Detected shared linkage of DPDK 00:05:33.328 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.328 EAL: Selected IOVA mode 'PA' 00:05:33.328 Starting DPDK initialization... 00:05:33.328 Starting SPDK post initialization... 00:05:33.328 SPDK NVMe probe 00:05:33.328 Attaching to 0000:00:10.0 00:05:33.328 Attaching to 0000:00:11.0 00:05:33.328 Attached to 0000:00:10.0 00:05:33.328 Attached to 0000:00:11.0 00:05:33.328 Cleaning up... 
00:05:33.328 00:05:33.328 real 0m0.281s 00:05:33.328 user 0m0.088s 00:05:33.328 sys 0m0.095s 00:05:33.328 09:39:17 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.328 09:39:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.328 ************************************ 00:05:33.328 END TEST env_dpdk_post_init 00:05:33.328 ************************************ 00:05:33.587 09:39:18 env -- env/env.sh@26 -- # uname 00:05:33.587 09:39:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.587 09:39:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.587 09:39:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.587 09:39:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.587 09:39:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.587 ************************************ 00:05:33.587 START TEST env_mem_callbacks 00:05:33.587 ************************************ 00:05:33.587 09:39:18 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.587 EAL: Detected CPU lcores: 10 00:05:33.587 EAL: Detected NUMA nodes: 1 00:05:33.587 EAL: Detected shared linkage of DPDK 00:05:33.587 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.587 EAL: Selected IOVA mode 'PA' 00:05:33.587 00:05:33.587 00:05:33.587 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.587 http://cunit.sourceforge.net/ 00:05:33.587 00:05:33.587 00:05:33.587 Suite: memory 00:05:33.587 Test: test ... 
00:05:33.587 register 0x200000200000 2097152 00:05:33.587 malloc 3145728 00:05:33.587 register 0x200000400000 4194304 00:05:33.847 buf 0x2000004fffc0 len 3145728 PASSED 00:05:33.847 malloc 64 00:05:33.847 buf 0x2000004ffec0 len 64 PASSED 00:05:33.847 malloc 4194304 00:05:33.847 register 0x200000800000 6291456 00:05:33.847 buf 0x2000009fffc0 len 4194304 PASSED 00:05:33.847 free 0x2000004fffc0 3145728 00:05:33.847 free 0x2000004ffec0 64 00:05:33.847 unregister 0x200000400000 4194304 PASSED 00:05:33.847 free 0x2000009fffc0 4194304 00:05:33.847 unregister 0x200000800000 6291456 PASSED 00:05:33.847 malloc 8388608 00:05:33.847 register 0x200000400000 10485760 00:05:33.847 buf 0x2000005fffc0 len 8388608 PASSED 00:05:33.847 free 0x2000005fffc0 8388608 00:05:33.847 unregister 0x200000400000 10485760 PASSED 00:05:33.847 passed 00:05:33.847 00:05:33.847 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.847 suites 1 1 n/a 0 0 00:05:33.847 tests 1 1 1 0 0 00:05:33.847 asserts 15 15 15 0 n/a 00:05:33.847 00:05:33.847 Elapsed time = 0.094 seconds 00:05:33.847 00:05:33.847 real 0m0.304s 00:05:33.847 user 0m0.119s 00:05:33.847 sys 0m0.082s 00:05:33.847 09:39:18 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.847 09:39:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.847 ************************************ 00:05:33.847 END TEST env_mem_callbacks 00:05:33.847 ************************************ 00:05:33.847 00:05:33.847 real 0m10.680s 00:05:33.847 user 0m8.839s 00:05:33.847 sys 0m1.490s 00:05:33.847 09:39:18 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.847 09:39:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.847 ************************************ 00:05:33.847 END TEST env 00:05:33.847 ************************************ 00:05:33.847 09:39:18 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.847 09:39:18 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.847 09:39:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.847 09:39:18 -- common/autotest_common.sh@10 -- # set +x 00:05:33.847 ************************************ 00:05:33.847 START TEST rpc 00:05:33.847 ************************************ 00:05:33.847 09:39:18 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.105 * Looking for test storage... 00:05:34.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.105 09:39:18 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:34.105 09:39:18 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:34.105 09:39:18 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:34.105 09:39:18 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:34.105 09:39:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.105 09:39:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.105 09:39:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.105 09:39:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.105 09:39:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.105 09:39:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.105 09:39:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.105 09:39:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.105 09:39:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.105 09:39:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.105 09:39:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.106 09:39:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.106 09:39:18 rpc -- scripts/common.sh@345 -- # : 1 00:05:34.106 09:39:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.106 09:39:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.106 09:39:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.106 09:39:18 rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.106 09:39:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.106 09:39:18 rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.106 09:39:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.106 09:39:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.106 09:39:18 rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.106 09:39:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.106 09:39:18 rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.106 09:39:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.106 09:39:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.106 09:39:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.106 09:39:18 rpc -- scripts/common.sh@368 -- # return 0 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.106 --rc genhtml_branch_coverage=1 00:05:34.106 --rc genhtml_function_coverage=1 00:05:34.106 --rc genhtml_legend=1 00:05:34.106 --rc geninfo_all_blocks=1 00:05:34.106 --rc geninfo_unexecuted_blocks=1 00:05:34.106 00:05:34.106 ' 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.106 --rc genhtml_branch_coverage=1 00:05:34.106 --rc genhtml_function_coverage=1 00:05:34.106 --rc genhtml_legend=1 00:05:34.106 --rc geninfo_all_blocks=1 00:05:34.106 --rc geninfo_unexecuted_blocks=1 00:05:34.106 00:05:34.106 ' 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:34.106 --rc genhtml_branch_coverage=1 00:05:34.106 --rc genhtml_function_coverage=1 00:05:34.106 --rc genhtml_legend=1 00:05:34.106 --rc geninfo_all_blocks=1 00:05:34.106 --rc geninfo_unexecuted_blocks=1 00:05:34.106 00:05:34.106 ' 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:34.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.106 --rc genhtml_branch_coverage=1 00:05:34.106 --rc genhtml_function_coverage=1 00:05:34.106 --rc genhtml_legend=1 00:05:34.106 --rc geninfo_all_blocks=1 00:05:34.106 --rc geninfo_unexecuted_blocks=1 00:05:34.106 00:05:34.106 ' 00:05:34.106 09:39:18 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:34.106 09:39:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57132 00:05:34.106 09:39:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.106 09:39:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57132 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@831 -- # '[' -z 57132 ']' 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.106 09:39:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.364 [2024-10-11 09:39:18.796322] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:34.364 [2024-10-11 09:39:18.796487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57132 ] 00:05:34.364 [2024-10-11 09:39:18.966354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.623 [2024-10-11 09:39:19.098171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.623 [2024-10-11 09:39:19.098266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57132' to capture a snapshot of events at runtime. 00:05:34.623 [2024-10-11 09:39:19.098278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.623 [2024-10-11 09:39:19.098290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.623 [2024-10-11 09:39:19.098299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57132 for offline analysis/debug. 
00:05:34.623 [2024-10-11 09:39:19.099907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.557 09:39:20 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.557 09:39:20 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:35.557 09:39:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.557 09:39:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.557 09:39:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:35.557 09:39:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:35.557 09:39:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.557 09:39:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.557 09:39:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.557 ************************************ 00:05:35.557 START TEST rpc_integrity 00:05:35.557 ************************************ 00:05:35.557 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:35.557 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.557 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.557 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.557 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.557 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.557 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.817 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.817 09:39:20 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.817 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.817 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.817 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.817 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.818 { 00:05:35.818 "name": "Malloc0", 00:05:35.818 "aliases": [ 00:05:35.818 "6ff2ecb9-fa5b-4254-9dd6-73ac62e7f3f6" 00:05:35.818 ], 00:05:35.818 "product_name": "Malloc disk", 00:05:35.818 "block_size": 512, 00:05:35.818 "num_blocks": 16384, 00:05:35.818 "uuid": "6ff2ecb9-fa5b-4254-9dd6-73ac62e7f3f6", 00:05:35.818 "assigned_rate_limits": { 00:05:35.818 "rw_ios_per_sec": 0, 00:05:35.818 "rw_mbytes_per_sec": 0, 00:05:35.818 "r_mbytes_per_sec": 0, 00:05:35.818 "w_mbytes_per_sec": 0 00:05:35.818 }, 00:05:35.818 "claimed": false, 00:05:35.818 "zoned": false, 00:05:35.818 "supported_io_types": { 00:05:35.818 "read": true, 00:05:35.818 "write": true, 00:05:35.818 "unmap": true, 00:05:35.818 "flush": true, 00:05:35.818 "reset": true, 00:05:35.818 "nvme_admin": false, 00:05:35.818 "nvme_io": false, 00:05:35.818 "nvme_io_md": false, 00:05:35.818 "write_zeroes": true, 00:05:35.818 "zcopy": true, 00:05:35.818 "get_zone_info": false, 00:05:35.818 "zone_management": false, 00:05:35.818 "zone_append": false, 00:05:35.818 "compare": false, 00:05:35.818 "compare_and_write": false, 00:05:35.818 "abort": true, 00:05:35.818 "seek_hole": false, 
00:05:35.818 "seek_data": false, 00:05:35.818 "copy": true, 00:05:35.818 "nvme_iov_md": false 00:05:35.818 }, 00:05:35.818 "memory_domains": [ 00:05:35.818 { 00:05:35.818 "dma_device_id": "system", 00:05:35.818 "dma_device_type": 1 00:05:35.818 }, 00:05:35.818 { 00:05:35.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.818 "dma_device_type": 2 00:05:35.818 } 00:05:35.818 ], 00:05:35.818 "driver_specific": {} 00:05:35.818 } 00:05:35.818 ]' 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.818 [2024-10-11 09:39:20.313801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:35.818 [2024-10-11 09:39:20.313908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.818 [2024-10-11 09:39:20.313944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:35.818 [2024-10-11 09:39:20.313959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.818 [2024-10-11 09:39:20.316687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.818 [2024-10-11 09:39:20.316773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.818 Passthru0 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.818 { 00:05:35.818 "name": "Malloc0", 00:05:35.818 "aliases": [ 00:05:35.818 "6ff2ecb9-fa5b-4254-9dd6-73ac62e7f3f6" 00:05:35.818 ], 00:05:35.818 "product_name": "Malloc disk", 00:05:35.818 "block_size": 512, 00:05:35.818 "num_blocks": 16384, 00:05:35.818 "uuid": "6ff2ecb9-fa5b-4254-9dd6-73ac62e7f3f6", 00:05:35.818 "assigned_rate_limits": { 00:05:35.818 "rw_ios_per_sec": 0, 00:05:35.818 "rw_mbytes_per_sec": 0, 00:05:35.818 "r_mbytes_per_sec": 0, 00:05:35.818 "w_mbytes_per_sec": 0 00:05:35.818 }, 00:05:35.818 "claimed": true, 00:05:35.818 "claim_type": "exclusive_write", 00:05:35.818 "zoned": false, 00:05:35.818 "supported_io_types": { 00:05:35.818 "read": true, 00:05:35.818 "write": true, 00:05:35.818 "unmap": true, 00:05:35.818 "flush": true, 00:05:35.818 "reset": true, 00:05:35.818 "nvme_admin": false, 00:05:35.818 "nvme_io": false, 00:05:35.818 "nvme_io_md": false, 00:05:35.818 "write_zeroes": true, 00:05:35.818 "zcopy": true, 00:05:35.818 "get_zone_info": false, 00:05:35.818 "zone_management": false, 00:05:35.818 "zone_append": false, 00:05:35.818 "compare": false, 00:05:35.818 "compare_and_write": false, 00:05:35.818 "abort": true, 00:05:35.818 "seek_hole": false, 00:05:35.818 "seek_data": false, 00:05:35.818 "copy": true, 00:05:35.818 "nvme_iov_md": false 00:05:35.818 }, 00:05:35.818 "memory_domains": [ 00:05:35.818 { 00:05:35.818 "dma_device_id": "system", 00:05:35.818 "dma_device_type": 1 00:05:35.818 }, 00:05:35.818 { 00:05:35.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.818 "dma_device_type": 2 00:05:35.818 } 00:05:35.818 ], 00:05:35.818 "driver_specific": {} 00:05:35.818 }, 00:05:35.818 { 00:05:35.818 "name": "Passthru0", 00:05:35.818 "aliases": [ 00:05:35.818 "0e31075a-6a65-5224-a61d-4cee9506df4c" 00:05:35.818 ], 00:05:35.818 "product_name": "passthru", 00:05:35.818 
"block_size": 512, 00:05:35.818 "num_blocks": 16384, 00:05:35.818 "uuid": "0e31075a-6a65-5224-a61d-4cee9506df4c", 00:05:35.818 "assigned_rate_limits": { 00:05:35.818 "rw_ios_per_sec": 0, 00:05:35.818 "rw_mbytes_per_sec": 0, 00:05:35.818 "r_mbytes_per_sec": 0, 00:05:35.818 "w_mbytes_per_sec": 0 00:05:35.818 }, 00:05:35.818 "claimed": false, 00:05:35.818 "zoned": false, 00:05:35.818 "supported_io_types": { 00:05:35.818 "read": true, 00:05:35.818 "write": true, 00:05:35.818 "unmap": true, 00:05:35.818 "flush": true, 00:05:35.818 "reset": true, 00:05:35.818 "nvme_admin": false, 00:05:35.818 "nvme_io": false, 00:05:35.818 "nvme_io_md": false, 00:05:35.818 "write_zeroes": true, 00:05:35.818 "zcopy": true, 00:05:35.818 "get_zone_info": false, 00:05:35.818 "zone_management": false, 00:05:35.818 "zone_append": false, 00:05:35.818 "compare": false, 00:05:35.818 "compare_and_write": false, 00:05:35.818 "abort": true, 00:05:35.818 "seek_hole": false, 00:05:35.818 "seek_data": false, 00:05:35.818 "copy": true, 00:05:35.818 "nvme_iov_md": false 00:05:35.818 }, 00:05:35.818 "memory_domains": [ 00:05:35.818 { 00:05:35.818 "dma_device_id": "system", 00:05:35.818 "dma_device_type": 1 00:05:35.818 }, 00:05:35.818 { 00:05:35.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.818 "dma_device_type": 2 00:05:35.818 } 00:05:35.818 ], 00:05:35.818 "driver_specific": { 00:05:35.818 "passthru": { 00:05:35.818 "name": "Passthru0", 00:05:35.818 "base_bdev_name": "Malloc0" 00:05:35.818 } 00:05:35.818 } 00:05:35.818 } 00:05:35.818 ]' 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.818 09:39:20 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.818 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.818 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.078 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.078 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.078 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.078 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.078 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.078 ************************************ 00:05:36.078 END TEST rpc_integrity 00:05:36.078 ************************************ 00:05:36.078 09:39:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.078 00:05:36.078 real 0m0.382s 00:05:36.078 user 0m0.205s 00:05:36.078 sys 0m0.062s 00:05:36.078 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.078 09:39:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 09:39:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:36.078 09:39:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.078 09:39:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.078 09:39:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 ************************************ 00:05:36.078 START TEST rpc_plugins 00:05:36.078 ************************************ 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:36.078 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.078 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:36.078 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.078 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:36.078 { 00:05:36.078 "name": "Malloc1", 00:05:36.078 "aliases": [ 00:05:36.078 "a4201193-c68a-4ba1-8fa4-39bc86e00133" 00:05:36.078 ], 00:05:36.078 "product_name": "Malloc disk", 00:05:36.078 "block_size": 4096, 00:05:36.078 "num_blocks": 256, 00:05:36.078 "uuid": "a4201193-c68a-4ba1-8fa4-39bc86e00133", 00:05:36.078 "assigned_rate_limits": { 00:05:36.078 "rw_ios_per_sec": 0, 00:05:36.078 "rw_mbytes_per_sec": 0, 00:05:36.078 "r_mbytes_per_sec": 0, 00:05:36.078 "w_mbytes_per_sec": 0 00:05:36.078 }, 00:05:36.078 "claimed": false, 00:05:36.078 "zoned": false, 00:05:36.078 "supported_io_types": { 00:05:36.078 "read": true, 00:05:36.078 "write": true, 00:05:36.078 "unmap": true, 00:05:36.078 "flush": true, 00:05:36.078 "reset": true, 00:05:36.078 "nvme_admin": false, 00:05:36.078 "nvme_io": false, 00:05:36.078 "nvme_io_md": false, 00:05:36.078 "write_zeroes": true, 00:05:36.078 "zcopy": true, 00:05:36.078 "get_zone_info": false, 00:05:36.078 "zone_management": false, 00:05:36.078 "zone_append": false, 00:05:36.078 "compare": false, 00:05:36.078 "compare_and_write": false, 00:05:36.078 "abort": true, 00:05:36.078 "seek_hole": false, 00:05:36.078 "seek_data": false, 00:05:36.078 "copy": 
true, 00:05:36.078 "nvme_iov_md": false 00:05:36.078 }, 00:05:36.078 "memory_domains": [ 00:05:36.078 { 00:05:36.078 "dma_device_id": "system", 00:05:36.078 "dma_device_type": 1 00:05:36.078 }, 00:05:36.078 { 00:05:36.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.078 "dma_device_type": 2 00:05:36.078 } 00:05:36.078 ], 00:05:36.078 "driver_specific": {} 00:05:36.078 } 00:05:36.078 ]' 00:05:36.078 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:36.078 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:36.078 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.078 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.337 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.337 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:36.338 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.338 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.338 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.338 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:36.338 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:36.338 ************************************ 00:05:36.338 END TEST rpc_plugins 00:05:36.338 ************************************ 00:05:36.338 09:39:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:36.338 00:05:36.338 real 0m0.180s 00:05:36.338 user 0m0.102s 00:05:36.338 sys 0m0.029s 00:05:36.338 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.338 09:39:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.338 09:39:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:36.338 09:39:20 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.338 09:39:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.338 09:39:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.338 ************************************ 00:05:36.338 START TEST rpc_trace_cmd_test 00:05:36.338 ************************************ 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:36.338 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57132", 00:05:36.338 "tpoint_group_mask": "0x8", 00:05:36.338 "iscsi_conn": { 00:05:36.338 "mask": "0x2", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "scsi": { 00:05:36.338 "mask": "0x4", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "bdev": { 00:05:36.338 "mask": "0x8", 00:05:36.338 "tpoint_mask": "0xffffffffffffffff" 00:05:36.338 }, 00:05:36.338 "nvmf_rdma": { 00:05:36.338 "mask": "0x10", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "nvmf_tcp": { 00:05:36.338 "mask": "0x20", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "ftl": { 00:05:36.338 "mask": "0x40", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "blobfs": { 00:05:36.338 "mask": "0x80", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "dsa": { 00:05:36.338 "mask": "0x200", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "thread": { 00:05:36.338 "mask": "0x400", 00:05:36.338 
"tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "nvme_pcie": { 00:05:36.338 "mask": "0x800", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "iaa": { 00:05:36.338 "mask": "0x1000", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "nvme_tcp": { 00:05:36.338 "mask": "0x2000", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "bdev_nvme": { 00:05:36.338 "mask": "0x4000", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "sock": { 00:05:36.338 "mask": "0x8000", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "blob": { 00:05:36.338 "mask": "0x10000", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "bdev_raid": { 00:05:36.338 "mask": "0x20000", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 }, 00:05:36.338 "scheduler": { 00:05:36.338 "mask": "0x40000", 00:05:36.338 "tpoint_mask": "0x0" 00:05:36.338 } 00:05:36.338 }' 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:36.338 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:36.597 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:36.597 09:39:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:36.597 09:39:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:36.597 09:39:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:36.597 09:39:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:36.597 09:39:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:36.597 ************************************ 00:05:36.597 END TEST rpc_trace_cmd_test 00:05:36.597 ************************************ 00:05:36.597 09:39:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:36.597 00:05:36.597 real 0m0.280s 00:05:36.597 user 
0m0.214s 00:05:36.597 sys 0m0.051s 00:05:36.597 09:39:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.597 09:39:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.597 09:39:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:36.597 09:39:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:36.597 09:39:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:36.597 09:39:21 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.597 09:39:21 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.597 09:39:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.597 ************************************ 00:05:36.597 START TEST rpc_daemon_integrity 00:05:36.597 ************************************ 00:05:36.597 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:36.597 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.597 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.597 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.597 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.597 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.597 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.856 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.856 { 00:05:36.856 "name": "Malloc2", 00:05:36.856 "aliases": [ 00:05:36.856 "a9609dc5-75a3-4834-9597-66bef6a3f7f3" 00:05:36.856 ], 00:05:36.856 "product_name": "Malloc disk", 00:05:36.856 "block_size": 512, 00:05:36.856 "num_blocks": 16384, 00:05:36.856 "uuid": "a9609dc5-75a3-4834-9597-66bef6a3f7f3", 00:05:36.856 "assigned_rate_limits": { 00:05:36.856 "rw_ios_per_sec": 0, 00:05:36.856 "rw_mbytes_per_sec": 0, 00:05:36.856 "r_mbytes_per_sec": 0, 00:05:36.856 "w_mbytes_per_sec": 0 00:05:36.856 }, 00:05:36.856 "claimed": false, 00:05:36.856 "zoned": false, 00:05:36.856 "supported_io_types": { 00:05:36.856 "read": true, 00:05:36.856 "write": true, 00:05:36.856 "unmap": true, 00:05:36.856 "flush": true, 00:05:36.856 "reset": true, 00:05:36.856 "nvme_admin": false, 00:05:36.856 "nvme_io": false, 00:05:36.856 "nvme_io_md": false, 00:05:36.856 "write_zeroes": true, 00:05:36.856 "zcopy": true, 00:05:36.856 "get_zone_info": false, 00:05:36.856 "zone_management": false, 00:05:36.856 "zone_append": false, 00:05:36.856 "compare": false, 00:05:36.856 "compare_and_write": false, 00:05:36.856 "abort": true, 00:05:36.856 "seek_hole": false, 00:05:36.856 "seek_data": false, 00:05:36.856 "copy": true, 00:05:36.856 "nvme_iov_md": false 00:05:36.856 }, 00:05:36.856 "memory_domains": [ 00:05:36.857 { 00:05:36.857 "dma_device_id": "system", 00:05:36.857 "dma_device_type": 1 00:05:36.857 }, 00:05:36.857 { 00:05:36.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.857 "dma_device_type": 2 00:05:36.857 } 
00:05:36.857 ], 00:05:36.857 "driver_specific": {} 00:05:36.857 } 00:05:36.857 ]' 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.857 [2024-10-11 09:39:21.371897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:36.857 [2024-10-11 09:39:21.372090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.857 [2024-10-11 09:39:21.372123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:36.857 [2024-10-11 09:39:21.372137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.857 [2024-10-11 09:39:21.374817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.857 [2024-10-11 09:39:21.374872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.857 Passthru0 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.857 { 00:05:36.857 "name": "Malloc2", 00:05:36.857 "aliases": [ 00:05:36.857 "a9609dc5-75a3-4834-9597-66bef6a3f7f3" 
00:05:36.857 ], 00:05:36.857 "product_name": "Malloc disk", 00:05:36.857 "block_size": 512, 00:05:36.857 "num_blocks": 16384, 00:05:36.857 "uuid": "a9609dc5-75a3-4834-9597-66bef6a3f7f3", 00:05:36.857 "assigned_rate_limits": { 00:05:36.857 "rw_ios_per_sec": 0, 00:05:36.857 "rw_mbytes_per_sec": 0, 00:05:36.857 "r_mbytes_per_sec": 0, 00:05:36.857 "w_mbytes_per_sec": 0 00:05:36.857 }, 00:05:36.857 "claimed": true, 00:05:36.857 "claim_type": "exclusive_write", 00:05:36.857 "zoned": false, 00:05:36.857 "supported_io_types": { 00:05:36.857 "read": true, 00:05:36.857 "write": true, 00:05:36.857 "unmap": true, 00:05:36.857 "flush": true, 00:05:36.857 "reset": true, 00:05:36.857 "nvme_admin": false, 00:05:36.857 "nvme_io": false, 00:05:36.857 "nvme_io_md": false, 00:05:36.857 "write_zeroes": true, 00:05:36.857 "zcopy": true, 00:05:36.857 "get_zone_info": false, 00:05:36.857 "zone_management": false, 00:05:36.857 "zone_append": false, 00:05:36.857 "compare": false, 00:05:36.857 "compare_and_write": false, 00:05:36.857 "abort": true, 00:05:36.857 "seek_hole": false, 00:05:36.857 "seek_data": false, 00:05:36.857 "copy": true, 00:05:36.857 "nvme_iov_md": false 00:05:36.857 }, 00:05:36.857 "memory_domains": [ 00:05:36.857 { 00:05:36.857 "dma_device_id": "system", 00:05:36.857 "dma_device_type": 1 00:05:36.857 }, 00:05:36.857 { 00:05:36.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.857 "dma_device_type": 2 00:05:36.857 } 00:05:36.857 ], 00:05:36.857 "driver_specific": {} 00:05:36.857 }, 00:05:36.857 { 00:05:36.857 "name": "Passthru0", 00:05:36.857 "aliases": [ 00:05:36.857 "a04f9c82-5563-5c66-9ecd-b35b1b048be8" 00:05:36.857 ], 00:05:36.857 "product_name": "passthru", 00:05:36.857 "block_size": 512, 00:05:36.857 "num_blocks": 16384, 00:05:36.857 "uuid": "a04f9c82-5563-5c66-9ecd-b35b1b048be8", 00:05:36.857 "assigned_rate_limits": { 00:05:36.857 "rw_ios_per_sec": 0, 00:05:36.857 "rw_mbytes_per_sec": 0, 00:05:36.857 "r_mbytes_per_sec": 0, 00:05:36.857 "w_mbytes_per_sec": 0 
00:05:36.857 }, 00:05:36.857 "claimed": false, 00:05:36.857 "zoned": false, 00:05:36.857 "supported_io_types": { 00:05:36.857 "read": true, 00:05:36.857 "write": true, 00:05:36.857 "unmap": true, 00:05:36.857 "flush": true, 00:05:36.857 "reset": true, 00:05:36.857 "nvme_admin": false, 00:05:36.857 "nvme_io": false, 00:05:36.857 "nvme_io_md": false, 00:05:36.857 "write_zeroes": true, 00:05:36.857 "zcopy": true, 00:05:36.857 "get_zone_info": false, 00:05:36.857 "zone_management": false, 00:05:36.857 "zone_append": false, 00:05:36.857 "compare": false, 00:05:36.857 "compare_and_write": false, 00:05:36.857 "abort": true, 00:05:36.857 "seek_hole": false, 00:05:36.857 "seek_data": false, 00:05:36.857 "copy": true, 00:05:36.857 "nvme_iov_md": false 00:05:36.857 }, 00:05:36.857 "memory_domains": [ 00:05:36.857 { 00:05:36.857 "dma_device_id": "system", 00:05:36.857 "dma_device_type": 1 00:05:36.857 }, 00:05:36.857 { 00:05:36.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.857 "dma_device_type": 2 00:05:36.857 } 00:05:36.857 ], 00:05:36.857 "driver_specific": { 00:05:36.857 "passthru": { 00:05:36.857 "name": "Passthru0", 00:05:36.857 "base_bdev_name": "Malloc2" 00:05:36.857 } 00:05:36.857 } 00:05:36.857 } 00:05:36.857 ]' 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:36.857 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.116 00:05:37.116 real 0m0.377s 00:05:37.116 user 0m0.194s 00:05:37.116 sys 0m0.072s 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.116 ************************************ 00:05:37.116 END TEST rpc_daemon_integrity 00:05:37.116 ************************************ 00:05:37.116 09:39:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.116 09:39:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:37.116 09:39:21 rpc -- rpc/rpc.sh@84 -- # killprocess 57132 00:05:37.116 09:39:21 rpc -- common/autotest_common.sh@950 -- # '[' -z 57132 ']' 00:05:37.116 09:39:21 rpc -- common/autotest_common.sh@954 -- # kill -0 57132 00:05:37.116 09:39:21 rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.116 09:39:21 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.116 09:39:21 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57132 00:05:37.116 killing process with pid 57132 00:05:37.116 09:39:21 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.116 09:39:21 rpc -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:05:37.117 09:39:21 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57132' 00:05:37.117 09:39:21 rpc -- common/autotest_common.sh@969 -- # kill 57132 00:05:37.117 09:39:21 rpc -- common/autotest_common.sh@974 -- # wait 57132 00:05:39.651 00:05:39.651 real 0m5.788s 00:05:39.651 user 0m6.366s 00:05:39.651 sys 0m1.059s 00:05:39.651 09:39:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.651 ************************************ 00:05:39.651 END TEST rpc 00:05:39.651 ************************************ 00:05:39.651 09:39:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.910 09:39:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:39.910 09:39:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.910 09:39:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.910 09:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:39.910 ************************************ 00:05:39.910 START TEST skip_rpc 00:05:39.910 ************************************ 00:05:39.910 09:39:24 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:39.910 * Looking for test storage... 
00:05:39.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.911 09:39:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:39.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.911 --rc genhtml_branch_coverage=1 00:05:39.911 --rc genhtml_function_coverage=1 00:05:39.911 --rc genhtml_legend=1 00:05:39.911 --rc geninfo_all_blocks=1 00:05:39.911 --rc geninfo_unexecuted_blocks=1 00:05:39.911 00:05:39.911 ' 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:39.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.911 --rc genhtml_branch_coverage=1 00:05:39.911 --rc genhtml_function_coverage=1 00:05:39.911 --rc genhtml_legend=1 00:05:39.911 --rc geninfo_all_blocks=1 00:05:39.911 --rc geninfo_unexecuted_blocks=1 00:05:39.911 00:05:39.911 ' 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:39.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.911 --rc genhtml_branch_coverage=1 00:05:39.911 --rc genhtml_function_coverage=1 00:05:39.911 --rc genhtml_legend=1 00:05:39.911 --rc geninfo_all_blocks=1 00:05:39.911 --rc geninfo_unexecuted_blocks=1 00:05:39.911 00:05:39.911 ' 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:39.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.911 --rc genhtml_branch_coverage=1 00:05:39.911 --rc genhtml_function_coverage=1 00:05:39.911 --rc genhtml_legend=1 00:05:39.911 --rc geninfo_all_blocks=1 00:05:39.911 --rc geninfo_unexecuted_blocks=1 00:05:39.911 00:05:39.911 ' 00:05:39.911 09:39:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.911 09:39:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.911 09:39:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.911 09:39:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.911 ************************************ 00:05:39.911 START TEST skip_rpc 00:05:39.911 ************************************ 00:05:39.911 09:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:39.911 09:39:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57371 00:05:39.911 09:39:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.911 09:39:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:39.911 09:39:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.170 [2024-10-11 09:39:24.648961] Starting SPDK v25.01-pre 
git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:40.170 [2024-10-11 09:39:24.649176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57371 ] 00:05:40.428 [2024-10-11 09:39:24.814888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.428 [2024-10-11 09:39:24.951809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57371 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57371 ']' 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57371 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57371 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.697 killing process with pid 57371 00:05:45.697 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57371' 00:05:45.698 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57371 00:05:45.698 09:39:29 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57371 00:05:47.615 00:05:47.615 real 0m7.702s 00:05:47.615 user 0m7.191s 00:05:47.615 sys 0m0.416s 00:05:47.615 09:39:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.615 09:39:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.615 ************************************ 00:05:47.615 END TEST skip_rpc 00:05:47.615 ************************************ 00:05:47.874 09:39:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.874 09:39:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.874 09:39:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.874 09:39:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.874 
************************************ 00:05:47.874 START TEST skip_rpc_with_json 00:05:47.874 ************************************ 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57476 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57476 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57476 ']' 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.874 09:39:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.874 [2024-10-11 09:39:32.428886] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:47.874 [2024-10-11 09:39:32.429190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57476 ] 00:05:48.134 [2024-10-11 09:39:32.601377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.134 [2024-10-11 09:39:32.731888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 [2024-10-11 09:39:33.776459] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.512 request: 00:05:49.512 { 00:05:49.512 "trtype": "tcp", 00:05:49.512 "method": "nvmf_get_transports", 00:05:49.512 "req_id": 1 00:05:49.512 } 00:05:49.512 Got JSON-RPC error response 00:05:49.512 response: 00:05:49.512 { 00:05:49.512 "code": -19, 00:05:49.512 "message": "No such device" 00:05:49.512 } 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 [2024-10-11 09:39:33.792620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.512 09:39:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.512 { 00:05:49.512 "subsystems": [ 00:05:49.512 { 00:05:49.512 "subsystem": "fsdev", 00:05:49.512 "config": [ 00:05:49.512 { 00:05:49.512 "method": "fsdev_set_opts", 00:05:49.512 "params": { 00:05:49.512 "fsdev_io_pool_size": 65535, 00:05:49.512 "fsdev_io_cache_size": 256 00:05:49.512 } 00:05:49.512 } 00:05:49.512 ] 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "subsystem": "keyring", 00:05:49.512 "config": [] 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "subsystem": "iobuf", 00:05:49.512 "config": [ 00:05:49.512 { 00:05:49.512 "method": "iobuf_set_options", 00:05:49.512 "params": { 00:05:49.512 "small_pool_count": 8192, 00:05:49.512 "large_pool_count": 1024, 00:05:49.512 "small_bufsize": 8192, 00:05:49.512 "large_bufsize": 135168 00:05:49.512 } 00:05:49.512 } 00:05:49.512 ] 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "subsystem": "sock", 00:05:49.512 "config": [ 00:05:49.512 { 00:05:49.512 "method": "sock_set_default_impl", 00:05:49.512 "params": { 00:05:49.512 "impl_name": "posix" 00:05:49.512 } 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "method": "sock_impl_set_options", 00:05:49.512 "params": { 00:05:49.512 "impl_name": "ssl", 00:05:49.512 "recv_buf_size": 4096, 00:05:49.512 "send_buf_size": 4096, 00:05:49.512 "enable_recv_pipe": true, 00:05:49.512 "enable_quickack": false, 00:05:49.512 "enable_placement_id": 0, 00:05:49.512 
"enable_zerocopy_send_server": true, 00:05:49.512 "enable_zerocopy_send_client": false, 00:05:49.512 "zerocopy_threshold": 0, 00:05:49.512 "tls_version": 0, 00:05:49.512 "enable_ktls": false 00:05:49.512 } 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "method": "sock_impl_set_options", 00:05:49.512 "params": { 00:05:49.512 "impl_name": "posix", 00:05:49.512 "recv_buf_size": 2097152, 00:05:49.512 "send_buf_size": 2097152, 00:05:49.512 "enable_recv_pipe": true, 00:05:49.512 "enable_quickack": false, 00:05:49.512 "enable_placement_id": 0, 00:05:49.512 "enable_zerocopy_send_server": true, 00:05:49.512 "enable_zerocopy_send_client": false, 00:05:49.512 "zerocopy_threshold": 0, 00:05:49.512 "tls_version": 0, 00:05:49.512 "enable_ktls": false 00:05:49.512 } 00:05:49.512 } 00:05:49.512 ] 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "subsystem": "vmd", 00:05:49.512 "config": [] 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "subsystem": "accel", 00:05:49.512 "config": [ 00:05:49.512 { 00:05:49.512 "method": "accel_set_options", 00:05:49.512 "params": { 00:05:49.512 "small_cache_size": 128, 00:05:49.512 "large_cache_size": 16, 00:05:49.512 "task_count": 2048, 00:05:49.512 "sequence_count": 2048, 00:05:49.512 "buf_count": 2048 00:05:49.512 } 00:05:49.512 } 00:05:49.512 ] 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "subsystem": "bdev", 00:05:49.512 "config": [ 00:05:49.512 { 00:05:49.512 "method": "bdev_set_options", 00:05:49.512 "params": { 00:05:49.512 "bdev_io_pool_size": 65535, 00:05:49.512 "bdev_io_cache_size": 256, 00:05:49.512 "bdev_auto_examine": true, 00:05:49.512 "iobuf_small_cache_size": 128, 00:05:49.512 "iobuf_large_cache_size": 16 00:05:49.512 } 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "method": "bdev_raid_set_options", 00:05:49.512 "params": { 00:05:49.512 "process_window_size_kb": 1024, 00:05:49.512 "process_max_bandwidth_mb_sec": 0 00:05:49.512 } 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "method": "bdev_iscsi_set_options", 00:05:49.512 "params": { 00:05:49.512 
"timeout_sec": 30 00:05:49.512 } 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "method": "bdev_nvme_set_options", 00:05:49.512 "params": { 00:05:49.512 "action_on_timeout": "none", 00:05:49.512 "timeout_us": 0, 00:05:49.512 "timeout_admin_us": 0, 00:05:49.512 "keep_alive_timeout_ms": 10000, 00:05:49.512 "arbitration_burst": 0, 00:05:49.512 "low_priority_weight": 0, 00:05:49.512 "medium_priority_weight": 0, 00:05:49.512 "high_priority_weight": 0, 00:05:49.512 "nvme_adminq_poll_period_us": 10000, 00:05:49.512 "nvme_ioq_poll_period_us": 0, 00:05:49.512 "io_queue_requests": 0, 00:05:49.512 "delay_cmd_submit": true, 00:05:49.512 "transport_retry_count": 4, 00:05:49.512 "bdev_retry_count": 3, 00:05:49.512 "transport_ack_timeout": 0, 00:05:49.512 "ctrlr_loss_timeout_sec": 0, 00:05:49.512 "reconnect_delay_sec": 0, 00:05:49.512 "fast_io_fail_timeout_sec": 0, 00:05:49.512 "disable_auto_failback": false, 00:05:49.512 "generate_uuids": false, 00:05:49.512 "transport_tos": 0, 00:05:49.512 "nvme_error_stat": false, 00:05:49.512 "rdma_srq_size": 0, 00:05:49.512 "io_path_stat": false, 00:05:49.512 "allow_accel_sequence": false, 00:05:49.512 "rdma_max_cq_size": 0, 00:05:49.512 "rdma_cm_event_timeout_ms": 0, 00:05:49.512 "dhchap_digests": [ 00:05:49.512 "sha256", 00:05:49.512 "sha384", 00:05:49.512 "sha512" 00:05:49.512 ], 00:05:49.512 "dhchap_dhgroups": [ 00:05:49.512 "null", 00:05:49.512 "ffdhe2048", 00:05:49.512 "ffdhe3072", 00:05:49.512 "ffdhe4096", 00:05:49.512 "ffdhe6144", 00:05:49.512 "ffdhe8192" 00:05:49.512 ] 00:05:49.512 } 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "method": "bdev_nvme_set_hotplug", 00:05:49.512 "params": { 00:05:49.512 "period_us": 100000, 00:05:49.512 "enable": false 00:05:49.512 } 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "method": "bdev_wait_for_examine" 00:05:49.513 } 00:05:49.513 ] 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "scsi", 00:05:49.513 "config": null 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "scheduler", 
00:05:49.513 "config": [ 00:05:49.513 { 00:05:49.513 "method": "framework_set_scheduler", 00:05:49.513 "params": { 00:05:49.513 "name": "static" 00:05:49.513 } 00:05:49.513 } 00:05:49.513 ] 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "vhost_scsi", 00:05:49.513 "config": [] 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "vhost_blk", 00:05:49.513 "config": [] 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "ublk", 00:05:49.513 "config": [] 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "nbd", 00:05:49.513 "config": [] 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "nvmf", 00:05:49.513 "config": [ 00:05:49.513 { 00:05:49.513 "method": "nvmf_set_config", 00:05:49.513 "params": { 00:05:49.513 "discovery_filter": "match_any", 00:05:49.513 "admin_cmd_passthru": { 00:05:49.513 "identify_ctrlr": false 00:05:49.513 }, 00:05:49.513 "dhchap_digests": [ 00:05:49.513 "sha256", 00:05:49.513 "sha384", 00:05:49.513 "sha512" 00:05:49.513 ], 00:05:49.513 "dhchap_dhgroups": [ 00:05:49.513 "null", 00:05:49.513 "ffdhe2048", 00:05:49.513 "ffdhe3072", 00:05:49.513 "ffdhe4096", 00:05:49.513 "ffdhe6144", 00:05:49.513 "ffdhe8192" 00:05:49.513 ] 00:05:49.513 } 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "method": "nvmf_set_max_subsystems", 00:05:49.513 "params": { 00:05:49.513 "max_subsystems": 1024 00:05:49.513 } 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "method": "nvmf_set_crdt", 00:05:49.513 "params": { 00:05:49.513 "crdt1": 0, 00:05:49.513 "crdt2": 0, 00:05:49.513 "crdt3": 0 00:05:49.513 } 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "method": "nvmf_create_transport", 00:05:49.513 "params": { 00:05:49.513 "trtype": "TCP", 00:05:49.513 "max_queue_depth": 128, 00:05:49.513 "max_io_qpairs_per_ctrlr": 127, 00:05:49.513 "in_capsule_data_size": 4096, 00:05:49.513 "max_io_size": 131072, 00:05:49.513 "io_unit_size": 131072, 00:05:49.513 "max_aq_depth": 128, 00:05:49.513 "num_shared_buffers": 511, 00:05:49.513 "buf_cache_size": 4294967295, 
00:05:49.513 "dif_insert_or_strip": false, 00:05:49.513 "zcopy": false, 00:05:49.513 "c2h_success": true, 00:05:49.513 "sock_priority": 0, 00:05:49.513 "abort_timeout_sec": 1, 00:05:49.513 "ack_timeout": 0, 00:05:49.513 "data_wr_pool_size": 0 00:05:49.513 } 00:05:49.513 } 00:05:49.513 ] 00:05:49.513 }, 00:05:49.513 { 00:05:49.513 "subsystem": "iscsi", 00:05:49.513 "config": [ 00:05:49.513 { 00:05:49.513 "method": "iscsi_set_options", 00:05:49.513 "params": { 00:05:49.513 "node_base": "iqn.2016-06.io.spdk", 00:05:49.513 "max_sessions": 128, 00:05:49.513 "max_connections_per_session": 2, 00:05:49.513 "max_queue_depth": 64, 00:05:49.513 "default_time2wait": 2, 00:05:49.513 "default_time2retain": 20, 00:05:49.513 "first_burst_length": 8192, 00:05:49.513 "immediate_data": true, 00:05:49.513 "allow_duplicated_isid": false, 00:05:49.513 "error_recovery_level": 0, 00:05:49.513 "nop_timeout": 60, 00:05:49.513 "nop_in_interval": 30, 00:05:49.513 "disable_chap": false, 00:05:49.513 "require_chap": false, 00:05:49.513 "mutual_chap": false, 00:05:49.513 "chap_group": 0, 00:05:49.513 "max_large_datain_per_connection": 64, 00:05:49.513 "max_r2t_per_connection": 4, 00:05:49.513 "pdu_pool_size": 36864, 00:05:49.513 "immediate_data_pool_size": 16384, 00:05:49.513 "data_out_pool_size": 2048 00:05:49.513 } 00:05:49.513 } 00:05:49.513 ] 00:05:49.513 } 00:05:49.513 ] 00:05:49.513 } 00:05:49.513 09:39:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.513 09:39:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57476 00:05:49.513 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57476 ']' 00:05:49.513 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57476 00:05:49.513 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:49.513 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
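The dump above is SPDK's saved JSON configuration: a top-level `subsystems` array whose entries carry a `config` list of `{"method", "params"}` records (note that `"config"` may be a populated list, an empty list as for `vmd`, or `null` as for `scsi`). A minimal sketch, using a hypothetical stand-in document in that same shape, of walking such a config to see which RPC methods would be replayed at startup:

```python
import json

# Tiny stand-in shaped like the dump above; the scheduler, vmd, and scsi
# entries mirror the three "config" variants the log shows (list, [], null).
doc = json.loads("""
{
  "subsystems": [
    {"subsystem": "scheduler",
     "config": [{"method": "framework_set_scheduler", "params": {"name": "static"}}]},
    {"subsystem": "vmd", "config": []},
    {"subsystem": "scsi", "config": null}
  ]
}
""")

def replayed_methods(cfg):
    # "config" may be a list, empty, or null; `or []` treats null like empty.
    return [(sub["subsystem"], entry["method"])
            for sub in cfg.get("subsystems", [])
            for entry in (sub.get("config") or [])]

print(replayed_methods(doc))
```

This is only an illustration of the document shape visible in the log, not SPDK's own loader; the real replay is done by `spdk_tgt --json`, as the test invokes below.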
00:05:49.513 09:39:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57476 00:05:49.513 killing process with pid 57476 00:05:49.513 09:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.513 09:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.513 09:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57476' 00:05:49.513 09:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57476 00:05:49.513 09:39:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57476 00:05:52.047 09:39:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57532 00:05:52.047 09:39:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.047 09:39:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:57.317 09:39:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57532 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57532 ']' 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57532 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57532 00:05:57.318 killing process with pid 57532 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57532' 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57532 00:05:57.318 09:39:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57532 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.857 ************************************ 00:05:59.857 END TEST skip_rpc_with_json 00:05:59.857 ************************************ 00:05:59.857 00:05:59.857 real 0m11.830s 00:05:59.857 user 0m11.292s 00:05:59.857 sys 0m0.900s 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.857 09:39:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.857 09:39:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.857 09:39:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.857 09:39:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.857 ************************************ 00:05:59.857 START TEST skip_rpc_with_delay 00:05:59.857 ************************************ 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.857 [2024-10-11 09:39:44.322442] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
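The error line above is the point of the skip_rpc_with_delay test: `spdk_tgt` must refuse `--wait-for-rpc` when `--no-rpc-server` disables the RPC server, and the `NOT` wrapper asserts the nonzero exit. The general fail-fast pattern for mutually exclusive flags can be sketched as follows (hypothetical re-creation with flag names mirroring the log, not SPDK's actual C argument parser):

```python
import argparse

def parse_args(argv):
    # Hypothetical stand-in for the conflict the log exercises:
    # --wait-for-rpc is meaningless when --no-rpc-server is given.
    p = argparse.ArgumentParser(prog="tgt")
    p.add_argument("--no-rpc-server", action="store_true")
    p.add_argument("--wait-for-rpc", action="store_true")
    args = p.parse_args(argv)
    if args.no_rpc_server and args.wait_for_rpc:
        # error() prints the message and exits with status 2 -- a nonzero
        # exit code, which is what the test's NOT wrapper checks for.
        p.error("cannot use '--wait-for-rpc' if no RPC server is going to be started")
    return args
```

Either flag alone parses normally; only the combination aborts, matching the `*ERROR*` line and the test's expected failure.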
00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.857 00:05:59.857 real 0m0.196s 00:05:59.857 user 0m0.101s 00:05:59.857 sys 0m0.093s 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.857 ************************************ 00:05:59.857 END TEST skip_rpc_with_delay 00:05:59.857 ************************************ 00:05:59.857 09:39:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.857 09:39:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.857 09:39:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.857 09:39:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.857 09:39:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.857 09:39:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.857 09:39:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.857 ************************************ 00:05:59.857 START TEST exit_on_failed_rpc_init 00:05:59.857 ************************************ 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57671 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57671 00:05:59.857 09:39:44 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57671 ']' 00:05:59.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.857 09:39:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.118 [2024-10-11 09:39:44.590044] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:06:00.118 [2024-10-11 09:39:44.590219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57671 ] 00:06:00.378 [2024-10-11 09:39:44.761281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.378 [2024-10-11 09:39:44.903128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.395 09:39:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.395 09:39:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.653 [2024-10-11 09:39:46.088535] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:06:01.653 [2024-10-11 09:39:46.089311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57689 ] 00:06:01.653 [2024-10-11 09:39:46.260194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.912 [2024-10-11 09:39:46.423333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.912 [2024-10-11 09:39:46.423602] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:01.912 [2024-10-11 09:39:46.423756] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.912 [2024-10-11 09:39:46.423809] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57671 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57671 ']' 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57671 00:06:02.171 09:39:46 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.171 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57671 00:06:02.430 killing process with pid 57671 00:06:02.430 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.430 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.430 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57671' 00:06:02.430 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57671 00:06:02.430 09:39:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57671 00:06:04.959 ************************************ 00:06:04.959 END TEST exit_on_failed_rpc_init 00:06:04.959 ************************************ 00:06:04.959 00:06:04.959 real 0m5.099s 00:06:04.959 user 0m5.544s 00:06:04.959 sys 0m0.664s 00:06:04.959 09:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.959 09:39:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.217 09:39:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.217 ************************************ 00:06:05.217 END TEST skip_rpc 00:06:05.217 ************************************ 00:06:05.217 00:06:05.217 real 0m25.305s 00:06:05.217 user 0m24.325s 00:06:05.217 sys 0m2.370s 00:06:05.217 09:39:49 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.217 09:39:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.217 09:39:49 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.217 09:39:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.217 09:39:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.217 09:39:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.217 ************************************ 00:06:05.217 START TEST rpc_client 00:06:05.217 ************************************ 00:06:05.217 09:39:49 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.217 * Looking for test storage... 00:06:05.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:05.217 09:39:49 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.217 09:39:49 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.217 09:39:49 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.475 09:39:49 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.475 09:39:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:05.475 09:39:49 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.476 09:39:49 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.476 --rc genhtml_branch_coverage=1 00:06:05.476 --rc genhtml_function_coverage=1 00:06:05.476 --rc genhtml_legend=1 00:06:05.476 --rc geninfo_all_blocks=1 00:06:05.476 --rc geninfo_unexecuted_blocks=1 00:06:05.476 00:06:05.476 ' 00:06:05.476 09:39:49 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.476 --rc genhtml_branch_coverage=1 00:06:05.476 --rc genhtml_function_coverage=1 00:06:05.476 --rc 
genhtml_legend=1 00:06:05.476 --rc geninfo_all_blocks=1 00:06:05.476 --rc geninfo_unexecuted_blocks=1 00:06:05.476 00:06:05.476 ' 00:06:05.476 09:39:49 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.476 --rc genhtml_branch_coverage=1 00:06:05.476 --rc genhtml_function_coverage=1 00:06:05.476 --rc genhtml_legend=1 00:06:05.476 --rc geninfo_all_blocks=1 00:06:05.476 --rc geninfo_unexecuted_blocks=1 00:06:05.476 00:06:05.476 ' 00:06:05.476 09:39:49 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.476 --rc genhtml_branch_coverage=1 00:06:05.476 --rc genhtml_function_coverage=1 00:06:05.476 --rc genhtml_legend=1 00:06:05.476 --rc geninfo_all_blocks=1 00:06:05.476 --rc geninfo_unexecuted_blocks=1 00:06:05.476 00:06:05.476 ' 00:06:05.476 09:39:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:05.476 OK 00:06:05.476 09:39:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.476 00:06:05.476 real 0m0.319s 00:06:05.476 user 0m0.171s 00:06:05.476 sys 0m0.163s 00:06:05.476 09:39:50 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.476 09:39:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:05.476 ************************************ 00:06:05.476 END TEST rpc_client 00:06:05.476 ************************************ 00:06:05.476 09:39:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.476 09:39:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.476 09:39:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.476 09:39:50 -- common/autotest_common.sh@10 -- # set +x 00:06:05.476 ************************************ 00:06:05.476 START TEST json_config 
00:06:05.476 ************************************ 00:06:05.476 09:39:50 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.742 09:39:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.742 09:39:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.742 09:39:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.742 09:39:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.742 09:39:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.742 09:39:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.742 09:39:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.742 09:39:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:05.742 09:39:50 json_config -- scripts/common.sh@345 -- # : 1 00:06:05.742 09:39:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.742 09:39:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.742 09:39:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:05.742 09:39:50 json_config -- scripts/common.sh@353 -- # local d=1 00:06:05.742 09:39:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.742 09:39:50 json_config -- scripts/common.sh@355 -- # echo 1 00:06:05.742 09:39:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.742 09:39:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@353 -- # local d=2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.742 09:39:50 json_config -- scripts/common.sh@355 -- # echo 2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.742 09:39:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.742 09:39:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.742 09:39:50 json_config -- scripts/common.sh@368 -- # return 0 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.742 --rc genhtml_branch_coverage=1 00:06:05.742 --rc genhtml_function_coverage=1 00:06:05.742 --rc genhtml_legend=1 00:06:05.742 --rc geninfo_all_blocks=1 00:06:05.742 --rc geninfo_unexecuted_blocks=1 00:06:05.742 00:06:05.742 ' 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.742 --rc genhtml_branch_coverage=1 00:06:05.742 --rc genhtml_function_coverage=1 00:06:05.742 --rc genhtml_legend=1 00:06:05.742 --rc geninfo_all_blocks=1 00:06:05.742 --rc geninfo_unexecuted_blocks=1 00:06:05.742 00:06:05.742 ' 00:06:05.742 09:39:50 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.742 --rc genhtml_branch_coverage=1 00:06:05.742 --rc genhtml_function_coverage=1 00:06:05.742 --rc genhtml_legend=1 00:06:05.742 --rc geninfo_all_blocks=1 00:06:05.742 --rc geninfo_unexecuted_blocks=1 00:06:05.742 00:06:05.742 ' 00:06:05.742 09:39:50 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.742 --rc genhtml_branch_coverage=1 00:06:05.742 --rc genhtml_function_coverage=1 00:06:05.742 --rc genhtml_legend=1 00:06:05.742 --rc geninfo_all_blocks=1 00:06:05.742 --rc geninfo_unexecuted_blocks=1 00:06:05.742 00:06:05.742 ' 00:06:05.742 09:39:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ae20291-71ab-43d0-8891-47a0451aa469 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1ae20291-71ab-43d0-8891-47a0451aa469 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.742 09:39:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.742 09:39:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.742 09:39:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.742 09:39:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.742 09:39:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.742 09:39:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.742 09:39:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.742 09:39:50 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.742 09:39:50 json_config -- paths/export.sh@5 -- # export PATH 00:06:05.743 09:39:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@51 -- # : 0 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.743 09:39:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.743 WARNING: No tests are enabled so not running JSON configuration tests 00:06:05.743 09:39:50 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:05.743 09:39:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:05.743 09:39:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:05.743 09:39:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:05.743 09:39:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:05.743 09:39:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:05.743 09:39:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:05.743 00:06:05.743 real 0m0.215s 00:06:05.743 user 0m0.130s 00:06:05.743 sys 0m0.087s 00:06:05.743 09:39:50 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.743 09:39:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.743 ************************************ 00:06:05.743 END TEST json_config 00:06:05.743 ************************************ 00:06:05.743 09:39:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:05.743 09:39:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.743 09:39:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.743 09:39:50 -- common/autotest_common.sh@10 -- # set +x 00:06:05.743 ************************************ 00:06:05.743 START TEST json_config_extra_key 00:06:05.743 ************************************ 00:06:05.743 09:39:50 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:06.016 09:39:50 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:06.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.016 --rc genhtml_branch_coverage=1 00:06:06.016 --rc genhtml_function_coverage=1 00:06:06.016 --rc genhtml_legend=1 00:06:06.016 --rc geninfo_all_blocks=1 00:06:06.016 --rc geninfo_unexecuted_blocks=1 00:06:06.016 00:06:06.016 ' 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:06.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.016 --rc genhtml_branch_coverage=1 00:06:06.016 --rc genhtml_function_coverage=1 00:06:06.016 --rc 
genhtml_legend=1 00:06:06.016 --rc geninfo_all_blocks=1 00:06:06.016 --rc geninfo_unexecuted_blocks=1 00:06:06.016 00:06:06.016 ' 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:06.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.016 --rc genhtml_branch_coverage=1 00:06:06.016 --rc genhtml_function_coverage=1 00:06:06.016 --rc genhtml_legend=1 00:06:06.016 --rc geninfo_all_blocks=1 00:06:06.016 --rc geninfo_unexecuted_blocks=1 00:06:06.016 00:06:06.016 ' 00:06:06.016 09:39:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:06.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.016 --rc genhtml_branch_coverage=1 00:06:06.016 --rc genhtml_function_coverage=1 00:06:06.016 --rc genhtml_legend=1 00:06:06.016 --rc geninfo_all_blocks=1 00:06:06.016 --rc geninfo_unexecuted_blocks=1 00:06:06.016 00:06:06.016 ' 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ae20291-71ab-43d0-8891-47a0451aa469 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1ae20291-71ab-43d0-8891-47a0451aa469 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.016 09:39:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.016 09:39:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.016 09:39:50 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.016 09:39:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.016 09:39:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:06.016 09:39:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.016 09:39:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:06.016 INFO: launching applications... 
00:06:06.016 09:39:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.016 09:39:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:06.016 09:39:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.017 Waiting for target to run... 00:06:06.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57905 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57905 /var/tmp/spdk_tgt.sock 00:06:06.017 09:39:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.017 09:39:50 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57905 ']' 00:06:06.017 09:39:50 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.017 09:39:50 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.017 09:39:50 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.017 09:39:50 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.017 09:39:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.275 [2024-10-11 09:39:50.716041] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:06:06.275 [2024-10-11 09:39:50.716311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57905 ] 00:06:06.842 [2024-10-11 09:39:51.301280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.842 [2024-10-11 09:39:51.447221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.216 09:39:52 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.216 09:39:52 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.216 00:06:08.216 09:39:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:08.216 INFO: shutting down applications... 00:06:08.216 09:39:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57905 ]] 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57905 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:08.216 09:39:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.475 09:39:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.475 09:39:52 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.475 09:39:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:08.475 09:39:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.040 09:39:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.040 09:39:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.040 09:39:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:09.041 09:39:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.299 09:39:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.299 09:39:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.299 09:39:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:09.299 09:39:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.897 09:39:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.897 09:39:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.897 09:39:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:09.897 09:39:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.470 09:39:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.471 09:39:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.471 09:39:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:10.471 09:39:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.040 09:39:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.040 09:39:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.040 09:39:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:11.040 09:39:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 
00:06:11.608 09:39:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.608 09:39:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.608 09:39:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:11.608 09:39:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.868 09:39:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.868 09:39:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.868 09:39:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:06:11.868 09:39:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.868 09:39:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.868 09:39:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.868 09:39:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.868 SPDK target shutdown done 00:06:11.868 09:39:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.868 Success 00:06:11.868 00:06:11.868 real 0m6.106s 00:06:11.868 user 0m5.187s 00:06:11.868 sys 0m0.857s 00:06:11.868 09:39:56 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.868 09:39:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.868 ************************************ 00:06:11.868 END TEST json_config_extra_key 00:06:11.868 ************************************ 00:06:11.868 09:39:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.868 09:39:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.868 09:39:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.868 09:39:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.868 ************************************ 
00:06:11.868 START TEST alias_rpc 00:06:11.868 ************************************ 00:06:12.127 09:39:56 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.127 * Looking for test storage... 00:06:12.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:12.127 09:39:56 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.127 09:39:56 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.127 09:39:56 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.127 09:39:56 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.127 09:39:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.128 09:39:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.128 --rc genhtml_branch_coverage=1 00:06:12.128 --rc genhtml_function_coverage=1 00:06:12.128 --rc genhtml_legend=1 00:06:12.128 --rc geninfo_all_blocks=1 00:06:12.128 --rc geninfo_unexecuted_blocks=1 00:06:12.128 00:06:12.128 ' 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.128 --rc genhtml_branch_coverage=1 00:06:12.128 --rc genhtml_function_coverage=1 00:06:12.128 --rc genhtml_legend=1 00:06:12.128 --rc geninfo_all_blocks=1 00:06:12.128 --rc geninfo_unexecuted_blocks=1 00:06:12.128 00:06:12.128 ' 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:06:12.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.128 --rc genhtml_branch_coverage=1 00:06:12.128 --rc genhtml_function_coverage=1 00:06:12.128 --rc genhtml_legend=1 00:06:12.128 --rc geninfo_all_blocks=1 00:06:12.128 --rc geninfo_unexecuted_blocks=1 00:06:12.128 00:06:12.128 ' 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.128 --rc genhtml_branch_coverage=1 00:06:12.128 --rc genhtml_function_coverage=1 00:06:12.128 --rc genhtml_legend=1 00:06:12.128 --rc geninfo_all_blocks=1 00:06:12.128 --rc geninfo_unexecuted_blocks=1 00:06:12.128 00:06:12.128 ' 00:06:12.128 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.128 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58035 00:06:12.128 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.128 09:39:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58035 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58035 ']' 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.128 09:39:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.386 [2024-10-11 09:39:56.813187] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:06:12.386 [2024-10-11 09:39:56.813480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58035 ] 00:06:12.386 [2024-10-11 09:39:56.991052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.645 [2024-10-11 09:39:57.170491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.022 09:39:58 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.022 09:39:58 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.022 09:39:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:14.280 09:39:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58035 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58035 ']' 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58035 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58035 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58035' 00:06:14.280 killing process with pid 58035 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@969 -- # kill 58035 00:06:14.280 09:39:58 alias_rpc -- common/autotest_common.sh@974 -- # wait 58035 00:06:17.568 00:06:17.568 real 0m5.380s 00:06:17.568 user 0m5.361s 00:06:17.568 sys 0m0.787s 00:06:17.568 09:40:01 alias_rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:17.568 09:40:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.568 ************************************ 00:06:17.568 END TEST alias_rpc 00:06:17.568 ************************************ 00:06:17.568 09:40:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:17.568 09:40:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:17.568 09:40:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.568 09:40:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.568 09:40:01 -- common/autotest_common.sh@10 -- # set +x 00:06:17.568 ************************************ 00:06:17.568 START TEST spdkcli_tcp 00:06:17.568 ************************************ 00:06:17.568 09:40:01 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:17.568 * Looking for test storage... 00:06:17.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:17.568 09:40:01 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.568 09:40:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:17.568 09:40:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.568 
09:40:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.568 09:40:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:17.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.568 --rc genhtml_branch_coverage=1 00:06:17.568 --rc genhtml_function_coverage=1 00:06:17.568 --rc genhtml_legend=1 
00:06:17.568 --rc geninfo_all_blocks=1 00:06:17.568 --rc geninfo_unexecuted_blocks=1 00:06:17.568 00:06:17.568 ' 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:17.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.568 --rc genhtml_branch_coverage=1 00:06:17.568 --rc genhtml_function_coverage=1 00:06:17.568 --rc genhtml_legend=1 00:06:17.568 --rc geninfo_all_blocks=1 00:06:17.568 --rc geninfo_unexecuted_blocks=1 00:06:17.568 00:06:17.568 ' 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:17.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.568 --rc genhtml_branch_coverage=1 00:06:17.568 --rc genhtml_function_coverage=1 00:06:17.568 --rc genhtml_legend=1 00:06:17.568 --rc geninfo_all_blocks=1 00:06:17.568 --rc geninfo_unexecuted_blocks=1 00:06:17.568 00:06:17.568 ' 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:17.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.568 --rc genhtml_branch_coverage=1 00:06:17.568 --rc genhtml_function_coverage=1 00:06:17.568 --rc genhtml_legend=1 00:06:17.568 --rc geninfo_all_blocks=1 00:06:17.568 --rc geninfo_unexecuted_blocks=1 00:06:17.568 00:06:17.568 ' 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:17.568 09:40:02 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58153 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58153 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58153 ']' 00:06:17.568 09:40:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.568 09:40:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.827 [2024-10-11 09:40:02.224913] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:06:17.827 [2024-10-11 09:40:02.225227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58153 ] 00:06:17.827 [2024-10-11 09:40:02.404696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.086 [2024-10-11 09:40:02.581314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.086 [2024-10-11 09:40:02.581326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.462 09:40:03 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.462 09:40:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:19.462 09:40:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58181 00:06:19.462 09:40:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:19.462 09:40:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:19.722 [ 00:06:19.722 "bdev_malloc_delete", 00:06:19.722 "bdev_malloc_create", 00:06:19.722 "bdev_null_resize", 00:06:19.722 "bdev_null_delete", 00:06:19.722 "bdev_null_create", 00:06:19.722 "bdev_nvme_cuse_unregister", 00:06:19.722 "bdev_nvme_cuse_register", 00:06:19.722 "bdev_opal_new_user", 00:06:19.722 "bdev_opal_set_lock_state", 00:06:19.722 "bdev_opal_delete", 00:06:19.722 "bdev_opal_get_info", 00:06:19.722 "bdev_opal_create", 00:06:19.722 "bdev_nvme_opal_revert", 00:06:19.722 "bdev_nvme_opal_init", 00:06:19.722 "bdev_nvme_send_cmd", 00:06:19.722 "bdev_nvme_set_keys", 00:06:19.722 "bdev_nvme_get_path_iostat", 00:06:19.722 "bdev_nvme_get_mdns_discovery_info", 00:06:19.722 "bdev_nvme_stop_mdns_discovery", 00:06:19.722 "bdev_nvme_start_mdns_discovery", 00:06:19.722 "bdev_nvme_set_multipath_policy", 00:06:19.722 
"bdev_nvme_set_preferred_path", 00:06:19.722 "bdev_nvme_get_io_paths", 00:06:19.722 "bdev_nvme_remove_error_injection", 00:06:19.722 "bdev_nvme_add_error_injection", 00:06:19.722 "bdev_nvme_get_discovery_info", 00:06:19.722 "bdev_nvme_stop_discovery", 00:06:19.722 "bdev_nvme_start_discovery", 00:06:19.722 "bdev_nvme_get_controller_health_info", 00:06:19.722 "bdev_nvme_disable_controller", 00:06:19.722 "bdev_nvme_enable_controller", 00:06:19.722 "bdev_nvme_reset_controller", 00:06:19.722 "bdev_nvme_get_transport_statistics", 00:06:19.722 "bdev_nvme_apply_firmware", 00:06:19.722 "bdev_nvme_detach_controller", 00:06:19.722 "bdev_nvme_get_controllers", 00:06:19.722 "bdev_nvme_attach_controller", 00:06:19.722 "bdev_nvme_set_hotplug", 00:06:19.722 "bdev_nvme_set_options", 00:06:19.722 "bdev_passthru_delete", 00:06:19.722 "bdev_passthru_create", 00:06:19.722 "bdev_lvol_set_parent_bdev", 00:06:19.722 "bdev_lvol_set_parent", 00:06:19.722 "bdev_lvol_check_shallow_copy", 00:06:19.722 "bdev_lvol_start_shallow_copy", 00:06:19.722 "bdev_lvol_grow_lvstore", 00:06:19.722 "bdev_lvol_get_lvols", 00:06:19.722 "bdev_lvol_get_lvstores", 00:06:19.722 "bdev_lvol_delete", 00:06:19.722 "bdev_lvol_set_read_only", 00:06:19.722 "bdev_lvol_resize", 00:06:19.722 "bdev_lvol_decouple_parent", 00:06:19.722 "bdev_lvol_inflate", 00:06:19.722 "bdev_lvol_rename", 00:06:19.722 "bdev_lvol_clone_bdev", 00:06:19.722 "bdev_lvol_clone", 00:06:19.722 "bdev_lvol_snapshot", 00:06:19.722 "bdev_lvol_create", 00:06:19.722 "bdev_lvol_delete_lvstore", 00:06:19.722 "bdev_lvol_rename_lvstore", 00:06:19.722 "bdev_lvol_create_lvstore", 00:06:19.722 "bdev_raid_set_options", 00:06:19.722 "bdev_raid_remove_base_bdev", 00:06:19.722 "bdev_raid_add_base_bdev", 00:06:19.722 "bdev_raid_delete", 00:06:19.722 "bdev_raid_create", 00:06:19.722 "bdev_raid_get_bdevs", 00:06:19.722 "bdev_error_inject_error", 00:06:19.722 "bdev_error_delete", 00:06:19.722 "bdev_error_create", 00:06:19.722 "bdev_split_delete", 00:06:19.722 
"bdev_split_create", 00:06:19.722 "bdev_delay_delete", 00:06:19.722 "bdev_delay_create", 00:06:19.722 "bdev_delay_update_latency", 00:06:19.722 "bdev_zone_block_delete", 00:06:19.722 "bdev_zone_block_create", 00:06:19.722 "blobfs_create", 00:06:19.722 "blobfs_detect", 00:06:19.722 "blobfs_set_cache_size", 00:06:19.722 "bdev_aio_delete", 00:06:19.722 "bdev_aio_rescan", 00:06:19.722 "bdev_aio_create", 00:06:19.722 "bdev_ftl_set_property", 00:06:19.722 "bdev_ftl_get_properties", 00:06:19.722 "bdev_ftl_get_stats", 00:06:19.722 "bdev_ftl_unmap", 00:06:19.722 "bdev_ftl_unload", 00:06:19.722 "bdev_ftl_delete", 00:06:19.722 "bdev_ftl_load", 00:06:19.722 "bdev_ftl_create", 00:06:19.722 "bdev_virtio_attach_controller", 00:06:19.722 "bdev_virtio_scsi_get_devices", 00:06:19.722 "bdev_virtio_detach_controller", 00:06:19.722 "bdev_virtio_blk_set_hotplug", 00:06:19.722 "bdev_iscsi_delete", 00:06:19.722 "bdev_iscsi_create", 00:06:19.722 "bdev_iscsi_set_options", 00:06:19.722 "accel_error_inject_error", 00:06:19.722 "ioat_scan_accel_module", 00:06:19.722 "dsa_scan_accel_module", 00:06:19.722 "iaa_scan_accel_module", 00:06:19.722 "keyring_file_remove_key", 00:06:19.722 "keyring_file_add_key", 00:06:19.722 "keyring_linux_set_options", 00:06:19.722 "fsdev_aio_delete", 00:06:19.722 "fsdev_aio_create", 00:06:19.722 "iscsi_get_histogram", 00:06:19.722 "iscsi_enable_histogram", 00:06:19.722 "iscsi_set_options", 00:06:19.722 "iscsi_get_auth_groups", 00:06:19.722 "iscsi_auth_group_remove_secret", 00:06:19.722 "iscsi_auth_group_add_secret", 00:06:19.722 "iscsi_delete_auth_group", 00:06:19.722 "iscsi_create_auth_group", 00:06:19.722 "iscsi_set_discovery_auth", 00:06:19.722 "iscsi_get_options", 00:06:19.722 "iscsi_target_node_request_logout", 00:06:19.722 "iscsi_target_node_set_redirect", 00:06:19.722 "iscsi_target_node_set_auth", 00:06:19.722 "iscsi_target_node_add_lun", 00:06:19.722 "iscsi_get_stats", 00:06:19.723 "iscsi_get_connections", 00:06:19.723 "iscsi_portal_group_set_auth", 
00:06:19.723 "iscsi_start_portal_group", 00:06:19.723 "iscsi_delete_portal_group", 00:06:19.723 "iscsi_create_portal_group", 00:06:19.723 "iscsi_get_portal_groups", 00:06:19.723 "iscsi_delete_target_node", 00:06:19.723 "iscsi_target_node_remove_pg_ig_maps", 00:06:19.723 "iscsi_target_node_add_pg_ig_maps", 00:06:19.723 "iscsi_create_target_node", 00:06:19.723 "iscsi_get_target_nodes", 00:06:19.723 "iscsi_delete_initiator_group", 00:06:19.723 "iscsi_initiator_group_remove_initiators", 00:06:19.723 "iscsi_initiator_group_add_initiators", 00:06:19.723 "iscsi_create_initiator_group", 00:06:19.723 "iscsi_get_initiator_groups", 00:06:19.723 "nvmf_set_crdt", 00:06:19.723 "nvmf_set_config", 00:06:19.723 "nvmf_set_max_subsystems", 00:06:19.723 "nvmf_stop_mdns_prr", 00:06:19.723 "nvmf_publish_mdns_prr", 00:06:19.723 "nvmf_subsystem_get_listeners", 00:06:19.723 "nvmf_subsystem_get_qpairs", 00:06:19.723 "nvmf_subsystem_get_controllers", 00:06:19.723 "nvmf_get_stats", 00:06:19.723 "nvmf_get_transports", 00:06:19.723 "nvmf_create_transport", 00:06:19.723 "nvmf_get_targets", 00:06:19.723 "nvmf_delete_target", 00:06:19.723 "nvmf_create_target", 00:06:19.723 "nvmf_subsystem_allow_any_host", 00:06:19.723 "nvmf_subsystem_set_keys", 00:06:19.723 "nvmf_subsystem_remove_host", 00:06:19.723 "nvmf_subsystem_add_host", 00:06:19.723 "nvmf_ns_remove_host", 00:06:19.723 "nvmf_ns_add_host", 00:06:19.723 "nvmf_subsystem_remove_ns", 00:06:19.723 "nvmf_subsystem_set_ns_ana_group", 00:06:19.723 "nvmf_subsystem_add_ns", 00:06:19.723 "nvmf_subsystem_listener_set_ana_state", 00:06:19.723 "nvmf_discovery_get_referrals", 00:06:19.723 "nvmf_discovery_remove_referral", 00:06:19.723 "nvmf_discovery_add_referral", 00:06:19.723 "nvmf_subsystem_remove_listener", 00:06:19.723 "nvmf_subsystem_add_listener", 00:06:19.723 "nvmf_delete_subsystem", 00:06:19.723 "nvmf_create_subsystem", 00:06:19.723 "nvmf_get_subsystems", 00:06:19.723 "env_dpdk_get_mem_stats", 00:06:19.723 "nbd_get_disks", 00:06:19.723 
"nbd_stop_disk", 00:06:19.723 "nbd_start_disk", 00:06:19.723 "ublk_recover_disk", 00:06:19.723 "ublk_get_disks", 00:06:19.723 "ublk_stop_disk", 00:06:19.723 "ublk_start_disk", 00:06:19.723 "ublk_destroy_target", 00:06:19.723 "ublk_create_target", 00:06:19.723 "virtio_blk_create_transport", 00:06:19.723 "virtio_blk_get_transports", 00:06:19.723 "vhost_controller_set_coalescing", 00:06:19.723 "vhost_get_controllers", 00:06:19.723 "vhost_delete_controller", 00:06:19.723 "vhost_create_blk_controller", 00:06:19.723 "vhost_scsi_controller_remove_target", 00:06:19.723 "vhost_scsi_controller_add_target", 00:06:19.723 "vhost_start_scsi_controller", 00:06:19.723 "vhost_create_scsi_controller", 00:06:19.723 "thread_set_cpumask", 00:06:19.723 "scheduler_set_options", 00:06:19.723 "framework_get_governor", 00:06:19.723 "framework_get_scheduler", 00:06:19.723 "framework_set_scheduler", 00:06:19.723 "framework_get_reactors", 00:06:19.723 "thread_get_io_channels", 00:06:19.723 "thread_get_pollers", 00:06:19.723 "thread_get_stats", 00:06:19.723 "framework_monitor_context_switch", 00:06:19.723 "spdk_kill_instance", 00:06:19.723 "log_enable_timestamps", 00:06:19.723 "log_get_flags", 00:06:19.723 "log_clear_flag", 00:06:19.723 "log_set_flag", 00:06:19.723 "log_get_level", 00:06:19.723 "log_set_level", 00:06:19.723 "log_get_print_level", 00:06:19.723 "log_set_print_level", 00:06:19.723 "framework_enable_cpumask_locks", 00:06:19.723 "framework_disable_cpumask_locks", 00:06:19.723 "framework_wait_init", 00:06:19.723 "framework_start_init", 00:06:19.723 "scsi_get_devices", 00:06:19.723 "bdev_get_histogram", 00:06:19.723 "bdev_enable_histogram", 00:06:19.723 "bdev_set_qos_limit", 00:06:19.723 "bdev_set_qd_sampling_period", 00:06:19.723 "bdev_get_bdevs", 00:06:19.723 "bdev_reset_iostat", 00:06:19.723 "bdev_get_iostat", 00:06:19.723 "bdev_examine", 00:06:19.723 "bdev_wait_for_examine", 00:06:19.723 "bdev_set_options", 00:06:19.723 "accel_get_stats", 00:06:19.723 "accel_set_options", 
00:06:19.723 "accel_set_driver", 00:06:19.723 "accel_crypto_key_destroy", 00:06:19.723 "accel_crypto_keys_get", 00:06:19.723 "accel_crypto_key_create", 00:06:19.723 "accel_assign_opc", 00:06:19.723 "accel_get_module_info", 00:06:19.723 "accel_get_opc_assignments", 00:06:19.723 "vmd_rescan", 00:06:19.723 "vmd_remove_device", 00:06:19.723 "vmd_enable", 00:06:19.723 "sock_get_default_impl", 00:06:19.723 "sock_set_default_impl", 00:06:19.723 "sock_impl_set_options", 00:06:19.723 "sock_impl_get_options", 00:06:19.723 "iobuf_get_stats", 00:06:19.723 "iobuf_set_options", 00:06:19.723 "keyring_get_keys", 00:06:19.723 "framework_get_pci_devices", 00:06:19.723 "framework_get_config", 00:06:19.723 "framework_get_subsystems", 00:06:19.723 "fsdev_set_opts", 00:06:19.723 "fsdev_get_opts", 00:06:19.723 "trace_get_info", 00:06:19.723 "trace_get_tpoint_group_mask", 00:06:19.723 "trace_disable_tpoint_group", 00:06:19.723 "trace_enable_tpoint_group", 00:06:19.723 "trace_clear_tpoint_mask", 00:06:19.723 "trace_set_tpoint_mask", 00:06:19.723 "notify_get_notifications", 00:06:19.723 "notify_get_types", 00:06:19.723 "spdk_get_version", 00:06:19.723 "rpc_get_methods" 00:06:19.723 ] 00:06:19.723 09:40:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.723 09:40:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:19.723 09:40:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58153 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58153 ']' 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58153 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.723 09:40:04 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58153 00:06:19.723 killing process with pid 58153 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58153' 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58153 00:06:19.723 09:40:04 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58153 00:06:23.020 ************************************ 00:06:23.020 END TEST spdkcli_tcp 00:06:23.020 ************************************ 00:06:23.020 00:06:23.020 real 0m5.368s 00:06:23.020 user 0m9.669s 00:06:23.020 sys 0m0.815s 00:06:23.020 09:40:07 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.020 09:40:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.020 09:40:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.020 09:40:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.020 09:40:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.020 09:40:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.020 ************************************ 00:06:23.020 START TEST dpdk_mem_utility 00:06:23.020 ************************************ 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.020 * Looking for test storage... 
00:06:23.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.020 09:40:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.020 --rc genhtml_branch_coverage=1 00:06:23.020 --rc genhtml_function_coverage=1 00:06:23.020 --rc genhtml_legend=1 00:06:23.020 --rc geninfo_all_blocks=1 00:06:23.020 --rc geninfo_unexecuted_blocks=1 00:06:23.020 00:06:23.020 ' 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.020 --rc genhtml_branch_coverage=1 00:06:23.020 --rc genhtml_function_coverage=1 00:06:23.020 --rc genhtml_legend=1 00:06:23.020 --rc geninfo_all_blocks=1 00:06:23.020 --rc 
geninfo_unexecuted_blocks=1 00:06:23.020 00:06:23.020 ' 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.020 --rc genhtml_branch_coverage=1 00:06:23.020 --rc genhtml_function_coverage=1 00:06:23.020 --rc genhtml_legend=1 00:06:23.020 --rc geninfo_all_blocks=1 00:06:23.020 --rc geninfo_unexecuted_blocks=1 00:06:23.020 00:06:23.020 ' 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.020 --rc genhtml_branch_coverage=1 00:06:23.020 --rc genhtml_function_coverage=1 00:06:23.020 --rc genhtml_legend=1 00:06:23.020 --rc geninfo_all_blocks=1 00:06:23.020 --rc geninfo_unexecuted_blocks=1 00:06:23.020 00:06:23.020 ' 00:06:23.020 09:40:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:23.020 09:40:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58290 00:06:23.020 09:40:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.020 09:40:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58290 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58290 ']' 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.020 09:40:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.020 [2024-10-11 09:40:07.649948] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:06:23.020 [2024-10-11 09:40:07.650364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:06:23.278 [2024-10-11 09:40:07.832205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.566 [2024-10-11 09:40:08.027119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.946 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.946 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:24.946 09:40:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:24.946 09:40:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:24.946 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.946 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.946 { 00:06:24.946 "filename": "/tmp/spdk_mem_dump.txt" 00:06:24.946 } 00:06:24.946 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.946 09:40:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:24.946 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:24.946 1 heaps totaling size 816.000000 MiB 00:06:24.946 size: 816.000000 MiB heap id: 0 00:06:24.946 end heaps---------- 00:06:24.946 9 mempools totaling size 595.772034 MiB 00:06:24.946 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:24.946 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:24.946 size: 92.545471 MiB name: bdev_io_58290 00:06:24.946 size: 50.003479 MiB name: msgpool_58290 00:06:24.946 size: 36.509338 MiB name: fsdev_io_58290 00:06:24.946 size: 21.763794 MiB name: PDU_Pool 00:06:24.946 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:24.946 size: 4.133484 MiB name: evtpool_58290 00:06:24.946 size: 0.026123 MiB name: Session_Pool 00:06:24.946 end mempools------- 00:06:24.946 6 memzones totaling size 4.142822 MiB 00:06:24.946 size: 1.000366 MiB name: RG_ring_0_58290 00:06:24.946 size: 1.000366 MiB name: RG_ring_1_58290 00:06:24.946 size: 1.000366 MiB name: RG_ring_4_58290 00:06:24.946 size: 1.000366 MiB name: RG_ring_5_58290 00:06:24.946 size: 0.125366 MiB name: RG_ring_2_58290 00:06:24.946 size: 0.015991 MiB name: RG_ring_3_58290 00:06:24.946 end memzones------- 00:06:24.946 09:40:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:24.946 heap id: 0 total size: 816.000000 MiB number of busy elements: 306 number of free elements: 18 00:06:24.946 list of free elements. 
size: 16.793579 MiB 00:06:24.946 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:24.946 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:24.946 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:24.946 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:24.946 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:24.946 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:24.946 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:24.946 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:24.946 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:24.946 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:24.946 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:24.946 element at address: 0x20001ac00000 with size: 0.564148 MiB 00:06:24.946 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:24.946 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:24.946 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:24.946 element at address: 0x200012c00000 with size: 0.443237 MiB 00:06:24.946 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:24.946 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:24.946 list of standard malloc elements. 
size: 199.285522 MiB 00:06:24.946 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:24.946 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:24.946 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:24.946 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:24.946 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:24.946 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:24.946 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:24.946 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:24.946 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:24.946 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:06:24.946 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:24.946 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:24.946 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:24.947 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:24.947 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:24.947 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71780 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71880 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71980 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c72080 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012c72180 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:06:24.947 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac91ac0 with size: 0.000244 
MiB 00:06:24.947 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:06:24.947 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac936c0 
with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:06:24.948 element at 
address: 0x20001ac952c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:06:24.948 element at address: 0x200028063f40 with size: 0.000244 MiB 00:06:24.948 element at address: 0x200028064040 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806af80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b080 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b180 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b280 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b380 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b480 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b580 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b680 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b780 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b880 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806b980 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806be80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c080 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c180 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c280 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c380 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c480 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c580 with size: 0.000244 MiB 
00:06:24.948 element at address: 0x20002806c680 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c780 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c880 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806c980 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d080 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d180 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d280 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d380 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d480 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d580 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d680 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d780 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d880 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806d980 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806da80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806db80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806de80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806df80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e080 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e180 with 
size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e280 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e380 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e480 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e580 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e680 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e780 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e880 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806e980 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f080 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f180 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f280 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f380 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f480 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f580 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f680 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f780 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f880 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806f980 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:06:24.948 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:06:24.948 element at address: 
0x20002806fd80 with size: 0.000244 MiB
00:06:24.948 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:06:24.948 list of memzone associated elements. size: 599.920898 MiB
00:06:24.948 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:06:24.948 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:24.948 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:06:24.948 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:24.948 element at address: 0x200012df4740 with size: 92.045105 MiB
00:06:24.948 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58290_0
00:06:24.948 element at address: 0x200000dff340 with size: 48.003113 MiB
00:06:24.948 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58290_0
00:06:24.948 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:06:24.948 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58290_0
00:06:24.948 element at address: 0x2000197be900 with size: 20.255615 MiB
00:06:24.948 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:24.948 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:06:24.948 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:24.948 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:06:24.948 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58290_0
00:06:24.948 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:06:24.948 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58290
00:06:24.948 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:24.948 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58290
00:06:24.948 element at address: 0x200018efde00 with size: 1.008179 MiB
00:06:24.948 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:24.948 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:06:24.948 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:24.948 element at address: 0x200018afde00 with size: 1.008179 MiB
00:06:24.948 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:24.948 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:06:24.949 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:24.949 element at address: 0x200000cff100 with size: 1.000549 MiB
00:06:24.949 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58290
00:06:24.949 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:06:24.949 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58290
00:06:24.949 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:06:24.949 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58290
00:06:24.949 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:06:24.949 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58290
00:06:24.949 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:06:24.949 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58290
00:06:24.949 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:06:24.949 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58290
00:06:24.949 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:06:24.949 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:24.949 element at address: 0x200012c72280 with size: 0.500549 MiB
00:06:24.949 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:24.949 element at address: 0x20001967c440 with size: 0.250549 MiB
00:06:24.949 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:24.949 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:06:24.949 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58290
00:06:24.949 element at address: 0x20000085df80 with size: 0.125549 MiB
00:06:24.949 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58290
00:06:24.949 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:06:24.949 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:24.949 element at address: 0x200028064140 with size: 0.023804 MiB
00:06:24.949 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:24.949 element at address: 0x200000859d40 with size: 0.016174 MiB
00:06:24.949 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58290
00:06:24.949 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:06:24.949 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:24.949 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:06:24.949 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58290
00:06:24.949 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:06:24.949 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58290
00:06:24.949 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:06:24.949 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58290
00:06:24.949 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:06:24.949 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:24.949 09:40:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:24.949 09:40:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58290
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58290 ']'
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58290
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58290
00:06:24.949 killing process with pid 58290
09:40:09 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58290'
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58290
00:06:24.949 09:40:09 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58290
00:06:28.246
00:06:28.246 real 0m5.300s
00:06:28.246 user 0m5.311s
00:06:28.246 sys 0m0.791s
00:06:28.246 09:40:12 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:28.246 09:40:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:28.246 ************************************
00:06:28.246 END TEST dpdk_mem_utility
00:06:28.246 ************************************
00:06:28.246 09:40:12 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:28.246 09:40:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:28.246 09:40:12 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:28.246 09:40:12 -- common/autotest_common.sh@10 -- # set +x
00:06:28.246 ************************************
00:06:28.246 START TEST event
00:06:28.246 ************************************
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:28.246 * Looking for test storage...
00:06:28.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1691 -- # lcov --version
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:28.246 09:40:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:28.246 09:40:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:28.246 09:40:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:28.246 09:40:12 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:28.246 09:40:12 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:28.246 09:40:12 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:28.246 09:40:12 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:28.246 09:40:12 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:28.246 09:40:12 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:28.246 09:40:12 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:28.246 09:40:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:28.246 09:40:12 event -- scripts/common.sh@344 -- # case "$op" in
00:06:28.246 09:40:12 event -- scripts/common.sh@345 -- # : 1
00:06:28.246 09:40:12 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:28.246 09:40:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:28.246 09:40:12 event -- scripts/common.sh@365 -- # decimal 1
00:06:28.246 09:40:12 event -- scripts/common.sh@353 -- # local d=1
00:06:28.246 09:40:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:28.246 09:40:12 event -- scripts/common.sh@355 -- # echo 1
00:06:28.246 09:40:12 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:28.246 09:40:12 event -- scripts/common.sh@366 -- # decimal 2
00:06:28.246 09:40:12 event -- scripts/common.sh@353 -- # local d=2
00:06:28.246 09:40:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:28.246 09:40:12 event -- scripts/common.sh@355 -- # echo 2
00:06:28.246 09:40:12 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:28.246 09:40:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:28.246 09:40:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:28.246 09:40:12 event -- scripts/common.sh@368 -- # return 0
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.246 --rc genhtml_branch_coverage=1
00:06:28.246 --rc genhtml_function_coverage=1
00:06:28.246 --rc genhtml_legend=1
00:06:28.246 --rc geninfo_all_blocks=1
00:06:28.246 --rc geninfo_unexecuted_blocks=1
00:06:28.246
00:06:28.246 '
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.246 --rc genhtml_branch_coverage=1
00:06:28.246 --rc genhtml_function_coverage=1
00:06:28.246 --rc genhtml_legend=1
00:06:28.246 --rc geninfo_all_blocks=1
00:06:28.246 --rc geninfo_unexecuted_blocks=1
00:06:28.246
00:06:28.246 '
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.246 --rc genhtml_branch_coverage=1
00:06:28.246 --rc genhtml_function_coverage=1
00:06:28.246 --rc genhtml_legend=1
00:06:28.246 --rc geninfo_all_blocks=1
00:06:28.246 --rc geninfo_unexecuted_blocks=1
00:06:28.246
00:06:28.246 '
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.246 --rc genhtml_branch_coverage=1
00:06:28.246 --rc genhtml_function_coverage=1
00:06:28.246 --rc genhtml_legend=1
00:06:28.246 --rc geninfo_all_blocks=1
00:06:28.246 --rc geninfo_unexecuted_blocks=1
00:06:28.246
00:06:28.246 '
00:06:28.246 09:40:12 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:28.246 09:40:12 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:28.246 09:40:12 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:06:28.246 09:40:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:28.246 09:40:12 event -- common/autotest_common.sh@10 -- # set +x
00:06:28.246 ************************************
00:06:28.246 START TEST event_perf
00:06:28.246 ************************************
00:06:28.246 09:40:12 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:28.506 Running I/O for 1 seconds...[2024-10-11 09:40:12.897768] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
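The `lt 1.15 2` trace above walks scripts/common.sh's `cmp_versions`: both version strings are split on `.`/`-`/`:` and compared field by field. A standalone bash sketch of that comparison (a simplified illustration, not the actual SPDK helper) could look like:

```shell
#!/usr/bin/env bash
# Sketch of the field-wise version comparison traced above.
# NOTE: simplified illustration; the real cmp_versions lives in scripts/common.sh.
version_lt() {
    local IFS='.-:'            # split fields the same way the trace does
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i a b
    for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
        a=${ver1[i]:-0}        # missing fields default to 0
        b=${ver2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace's outcome: `lt 1.15 2` returns 0, so the detected lcov is treated as older than 2.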
00:06:28.506 [2024-10-11 09:40:12.898035] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58416 ]
00:06:28.506 [2024-10-11 09:40:13.076612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:28.765 [2024-10-11 09:40:13.281057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.765 [2024-10-11 09:40:13.281138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:28.765 [2024-10-11 09:40:13.281209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.765 [2024-10-11 09:40:13.281222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:30.143 Running I/O for 1 seconds...
00:06:30.143 lcore 0: 93496
00:06:30.143 lcore 1: 93499
00:06:30.143 lcore 2: 93490
00:06:30.143 lcore 3: 93492
00:06:30.143 done.
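The four per-lcore totals printed by event_perf during its 1-second run can be summed straight from the log text; a small bash helper (hypothetical, not part of the SPDK test scripts) along those lines:

```shell
#!/usr/bin/env bash
# Sum the "lcore N: COUNT" lines that event_perf prints
# (values copied from the run above).
log='lcore 0: 93496
lcore 1: 93499
lcore 2: 93490
lcore 3: 93492'

total=0
while read -r _ _ count; do
    total=$(( total + count ))
done <<< "$log"

echo "total events: $total"   # prints "total events: 373977"
```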
00:06:30.143
00:06:30.143 real 0m1.774s
00:06:30.143 user 0m4.483s
00:06:30.143 sys 0m0.153s
00:06:30.143 09:40:14 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:30.143 09:40:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:30.143 ************************************
00:06:30.143 END TEST event_perf
00:06:30.143 ************************************
00:06:30.143 09:40:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:30.143 09:40:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:30.143 09:40:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:30.143 09:40:14 event -- common/autotest_common.sh@10 -- # set +x
00:06:30.143 ************************************
00:06:30.143 START TEST event_reactor
00:06:30.143 ************************************
00:06:30.143 09:40:14 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:30.143 [2024-10-11 09:40:14.720466] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:06:30.143 [2024-10-11 09:40:14.720861] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58450 ]
00:06:30.412 [2024-10-11 09:40:14.916963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.686 [2024-10-11 09:40:15.085122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.061 test_start
00:06:32.061 oneshot
00:06:32.061 tick 100
00:06:32.061 tick 100
00:06:32.061 tick 250
00:06:32.061 tick 100
00:06:32.061 tick 100
00:06:32.061 tick 100
00:06:32.061 tick 250
00:06:32.061 tick 500
00:06:32.061 tick 100
00:06:32.061 tick 100
00:06:32.061 tick 250
00:06:32.061 tick 100
00:06:32.061 tick 100
00:06:32.061 test_end
00:06:32.061
00:06:32.061 real 0m1.708s
00:06:32.061 user 0m1.470s
00:06:32.061 sys 0m0.125s
00:06:32.061 09:40:16 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:32.061 ************************************
00:06:32.061 END TEST event_reactor
00:06:32.061 ************************************
00:06:32.061 09:40:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:32.061 09:40:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:32.061 09:40:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:06:32.061 09:40:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:32.061 09:40:16 event -- common/autotest_common.sh@10 -- # set +x
00:06:32.061 ************************************
00:06:32.061 START TEST event_reactor_perf
00:06:32.061 ************************************
00:06:32.061 09:40:16 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:32.061 [2024-10-11
09:40:16.488652] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:06:32.061 [2024-10-11 09:40:16.488873] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58492 ] 00:06:32.061 [2024-10-11 09:40:16.660566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.319 [2024-10-11 09:40:16.824415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.695 test_start 00:06:33.696 test_end 00:06:33.696 Performance: 329060 events per second 00:06:33.696 ************************************ 00:06:33.696 END TEST event_reactor_perf 00:06:33.696 ************************************ 00:06:33.696 00:06:33.696 real 0m1.697s 00:06:33.696 user 0m1.460s 00:06:33.696 sys 0m0.125s 00:06:33.696 09:40:18 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.696 09:40:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.696 09:40:18 event -- event/event.sh@49 -- # uname -s 00:06:33.696 09:40:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.696 09:40:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.696 09:40:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.696 09:40:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.696 09:40:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.696 ************************************ 00:06:33.696 START TEST event_scheduler 00:06:33.696 ************************************ 00:06:33.696 09:40:18 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.696 * Looking for test storage... 
00:06:34.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:34.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.001 09:40:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.001 --rc genhtml_branch_coverage=1 00:06:34.001 --rc genhtml_function_coverage=1 00:06:34.001 --rc genhtml_legend=1 00:06:34.001 --rc geninfo_all_blocks=1 00:06:34.001 --rc geninfo_unexecuted_blocks=1 00:06:34.001 00:06:34.001 ' 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.001 
--rc genhtml_branch_coverage=1 00:06:34.001 --rc genhtml_function_coverage=1 00:06:34.001 --rc genhtml_legend=1 00:06:34.001 --rc geninfo_all_blocks=1 00:06:34.001 --rc geninfo_unexecuted_blocks=1 00:06:34.001 00:06:34.001 ' 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.001 --rc genhtml_branch_coverage=1 00:06:34.001 --rc genhtml_function_coverage=1 00:06:34.001 --rc genhtml_legend=1 00:06:34.001 --rc geninfo_all_blocks=1 00:06:34.001 --rc geninfo_unexecuted_blocks=1 00:06:34.001 00:06:34.001 ' 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.001 --rc genhtml_branch_coverage=1 00:06:34.001 --rc genhtml_function_coverage=1 00:06:34.001 --rc genhtml_legend=1 00:06:34.001 --rc geninfo_all_blocks=1 00:06:34.001 --rc geninfo_unexecuted_blocks=1 00:06:34.001 00:06:34.001 ' 00:06:34.001 09:40:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:34.001 09:40:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58568 00:06:34.001 09:40:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.001 09:40:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58568 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58568 ']' 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.001 09:40:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:34.001 09:40:18 event.event_scheduler -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.001 09:40:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.001 [2024-10-11 09:40:18.568568] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:06:34.001 [2024-10-11 09:40:18.569748] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58568 ] 00:06:34.261 [2024-10-11 09:40:18.745435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.520 [2024-10-11 09:40:18.936183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.520 [2024-10-11 09:40:18.936246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.520 [2024-10-11 09:40:18.936312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.520 [2024-10-11 09:40:18.936323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.088 09:40:19 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.088 09:40:19 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:35.088 09:40:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:35.088 09:40:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.088 09:40:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.088 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.088 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.088 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.088 POWER: Cannot set governor of lcore 0 to performance 00:06:35.088 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.088 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.088 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.088 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.088 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:35.088 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:35.088 POWER: Unable to set Power Management Environment for lcore 0 00:06:35.088 [2024-10-11 09:40:19.558929] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:35.088 [2024-10-11 09:40:19.559088] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:35.088 [2024-10-11 09:40:19.559185] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:35.088 [2024-10-11 09:40:19.559247] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:35.088 [2024-10-11 09:40:19.559386] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:35.088 [2024-10-11 09:40:19.559477] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:35.088 09:40:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.088 09:40:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:35.088 09:40:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.088 09:40:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 [2024-10-11 09:40:20.045350] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:35.656 09:40:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:35.656 09:40:20 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.656 09:40:20 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 ************************************ 00:06:35.656 START TEST scheduler_create_thread 00:06:35.656 ************************************ 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 2 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 3 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 4 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 5 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 6 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:35.656 7 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 8 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 9 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 10 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.656 09:40:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.033 09:40:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.033 09:40:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:37.033 09:40:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:37.033 09:40:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.033 09:40:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.968 09:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.968 09:40:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:37.968 09:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.968 09:40:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.534 09:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.534 09:40:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:38.534 09:40:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:38.534 09:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.534 09:40:23 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.468 ************************************ 00:06:39.468 END TEST scheduler_create_thread 00:06:39.468 ************************************ 00:06:39.468 09:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.468 00:06:39.468 real 0m3.788s 00:06:39.468 user 0m0.030s 00:06:39.468 sys 0m0.006s 00:06:39.468 09:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.468 09:40:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.468 09:40:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:39.468 09:40:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58568 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58568 ']' 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58568 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58568 00:06:39.468 killing process with pid 58568 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58568' 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58568 00:06:39.468 09:40:23 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58568 00:06:39.726 [2024-10-11 09:40:24.122698] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:41.100 00:06:41.100 real 0m7.463s 00:06:41.100 user 0m17.323s 00:06:41.100 sys 0m0.661s 00:06:41.100 09:40:25 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.100 09:40:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.100 ************************************ 00:06:41.100 END TEST event_scheduler 00:06:41.100 ************************************ 00:06:41.100 09:40:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:41.100 09:40:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:41.100 09:40:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.100 09:40:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.100 09:40:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.100 ************************************ 00:06:41.100 START TEST app_repeat 00:06:41.100 ************************************ 00:06:41.360 09:40:25 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58696 00:06:41.360 
09:40:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58696' 00:06:41.360 Process app_repeat pid: 58696 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:41.360 spdk_app_start Round 0 00:06:41.360 09:40:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58696 /var/tmp/spdk-nbd.sock 00:06:41.360 09:40:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58696 ']' 00:06:41.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.360 09:40:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.360 09:40:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.360 09:40:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.360 09:40:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.360 09:40:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.360 [2024-10-11 09:40:25.808817] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:06:41.360 [2024-10-11 09:40:25.809101] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58696 ] 00:06:41.360 [2024-10-11 09:40:25.969792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.627 [2024-10-11 09:40:26.141127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.627 [2024-10-11 09:40:26.141163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.562 09:40:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.562 09:40:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:42.562 09:40:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.562 Malloc0 00:06:42.562 09:40:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.130 Malloc1 00:06:43.130 09:40:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.130 09:40:27 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.130 09:40:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.389 /dev/nbd0 00:06:43.389 09:40:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.389 09:40:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.389 1+0 records in 00:06:43.389 1+0 
records out 00:06:43.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341506 s, 12.0 MB/s 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:43.389 09:40:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:43.389 09:40:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.389 09:40:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.389 09:40:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.647 /dev/nbd1 00:06:43.647 09:40:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.647 09:40:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.647 1+0 records in 00:06:43.647 1+0 records out 00:06:43.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408491 s, 10.0 MB/s 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:43.647 09:40:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:43.647 09:40:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.647 09:40:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.647 09:40:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.647 09:40:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.647 09:40:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.906 09:40:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.906 { 00:06:43.906 "nbd_device": "/dev/nbd0", 00:06:43.906 "bdev_name": "Malloc0" 00:06:43.906 }, 00:06:43.906 { 00:06:43.906 "nbd_device": "/dev/nbd1", 00:06:43.906 "bdev_name": "Malloc1" 00:06:43.906 } 00:06:43.906 ]' 00:06:43.906 09:40:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.906 09:40:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.906 { 00:06:43.906 "nbd_device": "/dev/nbd0", 00:06:43.906 "bdev_name": "Malloc0" 00:06:43.906 }, 00:06:43.906 { 00:06:43.906 "nbd_device": "/dev/nbd1", 00:06:43.906 "bdev_name": "Malloc1" 00:06:43.906 } 00:06:43.906 ]' 
00:06:43.906 09:40:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.906 /dev/nbd1' 00:06:43.906 09:40:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.906 /dev/nbd1' 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.907 09:40:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.165 256+0 records in 00:06:44.165 256+0 records out 00:06:44.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0056032 s, 187 MB/s 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.165 256+0 records in 00:06:44.165 256+0 records out 00:06:44.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303575 s, 34.5 MB/s 00:06:44.165 09:40:28 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.165 256+0 records in 00:06:44.165 256+0 records out 00:06:44.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0352381 s, 29.8 MB/s 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.165 09:40:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.425 09:40:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.684 09:40:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.943 09:40:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.943 09:40:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.513 09:40:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.449 [2024-10-11 09:40:31.045448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.708 [2024-10-11 09:40:31.174378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.708 [2024-10-11 09:40:31.174379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.968 
[2024-10-11 09:40:31.401631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.968 [2024-10-11 09:40:31.401750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.350 spdk_app_start Round 1 00:06:48.350 09:40:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.350 09:40:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:48.350 09:40:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58696 /var/tmp/spdk-nbd.sock 00:06:48.350 09:40:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58696 ']' 00:06:48.350 09:40:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.350 09:40:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:48.350 09:40:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:48.350 09:40:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.350 09:40:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.626 09:40:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.626 09:40:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:48.626 09:40:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.890 Malloc0 00:06:49.147 09:40:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.405 Malloc1 00:06:49.405 09:40:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.405 09:40:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.406 09:40:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.406 09:40:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.406 09:40:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.406 09:40:33 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.406 09:40:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.406 09:40:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.406 09:40:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.665 /dev/nbd0 00:06:49.665 09:40:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.665 09:40:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.665 1+0 records in 00:06:49.665 1+0 records out 00:06:49.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320255 s, 12.8 MB/s 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.665 
09:40:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.665 09:40:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.665 09:40:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.665 09:40:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.665 09:40:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.925 /dev/nbd1 00:06:49.925 09:40:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.925 09:40:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.925 1+0 records in 00:06:49.925 1+0 records out 00:06:49.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340261 s, 12.0 MB/s 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.925 09:40:34 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.925 09:40:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.925 09:40:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.925 09:40:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.925 09:40:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.925 09:40:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.925 09:40:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.185 09:40:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.185 { 00:06:50.185 "nbd_device": "/dev/nbd0", 00:06:50.185 "bdev_name": "Malloc0" 00:06:50.185 }, 00:06:50.185 { 00:06:50.185 "nbd_device": "/dev/nbd1", 00:06:50.185 "bdev_name": "Malloc1" 00:06:50.185 } 00:06:50.185 ]' 00:06:50.185 09:40:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.185 09:40:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.185 { 00:06:50.185 "nbd_device": "/dev/nbd0", 00:06:50.185 "bdev_name": "Malloc0" 00:06:50.185 }, 00:06:50.185 { 00:06:50.185 "nbd_device": "/dev/nbd1", 00:06:50.185 "bdev_name": "Malloc1" 00:06:50.185 } 00:06:50.185 ]' 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.445 /dev/nbd1' 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.445 /dev/nbd1' 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.445 
09:40:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.445 09:40:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.446 256+0 records in 00:06:50.446 256+0 records out 00:06:50.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142757 s, 73.5 MB/s 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.446 256+0 records in 00:06:50.446 256+0 records out 00:06:50.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022208 s, 47.2 MB/s 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.446 256+0 records in 00:06:50.446 256+0 records out 00:06:50.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319198 s, 32.9 MB/s 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.446 09:40:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.706 09:40:35 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.706 09:40:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.964 09:40:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.223 09:40:35 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.223 09:40:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.223 09:40:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.792 09:40:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.172 [2024-10-11 09:40:37.478078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.172 [2024-10-11 09:40:37.606419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.172 [2024-10-11 09:40:37.606450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.432 [2024-10-11 09:40:37.837061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.432 [2024-10-11 09:40:37.837149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.809 spdk_app_start Round 2 00:06:54.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:54.809 09:40:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.809 09:40:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:54.809 09:40:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58696 /var/tmp/spdk-nbd.sock 00:06:54.809 09:40:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58696 ']' 00:06:54.809 09:40:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.809 09:40:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.809 09:40:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.809 09:40:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.809 09:40:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.068 09:40:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.068 09:40:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:55.068 09:40:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.327 Malloc0 00:06:55.327 09:40:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.587 Malloc1 00:06:55.587 09:40:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.587 09:40:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:55.846 /dev/nbd0 00:06:55.847 09:40:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.847 09:40:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.847 1+0 records in 00:06:55.847 1+0 records out 00:06:55.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279954 s, 14.6 MB/s 00:06:55.847 09:40:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.105 09:40:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.105 09:40:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.105 09:40:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.105 09:40:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.105 09:40:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.105 09:40:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.105 09:40:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.364 /dev/nbd1 00:06:56.364 09:40:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.364 09:40:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.364 09:40:40 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.364 1+0 records in 00:06:56.364 1+0 records out 00:06:56.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541057 s, 7.6 MB/s 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.364 09:40:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.364 09:40:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.364 09:40:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.364 09:40:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.364 09:40:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.364 09:40:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.623 { 00:06:56.623 "nbd_device": "/dev/nbd0", 00:06:56.623 "bdev_name": "Malloc0" 00:06:56.623 }, 00:06:56.623 { 00:06:56.623 "nbd_device": "/dev/nbd1", 00:06:56.623 "bdev_name": "Malloc1" 00:06:56.623 } 00:06:56.623 ]' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.623 { 
00:06:56.623 "nbd_device": "/dev/nbd0", 00:06:56.623 "bdev_name": "Malloc0" 00:06:56.623 }, 00:06:56.623 { 00:06:56.623 "nbd_device": "/dev/nbd1", 00:06:56.623 "bdev_name": "Malloc1" 00:06:56.623 } 00:06:56.623 ]' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.623 /dev/nbd1' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.623 /dev/nbd1' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.623 256+0 records in 00:06:56.623 256+0 records out 00:06:56.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627104 s, 167 MB/s 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.623 09:40:41 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.623 256+0 records in 00:06:56.623 256+0 records out 00:06:56.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026081 s, 40.2 MB/s 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.623 256+0 records in 00:06:56.623 256+0 records out 00:06:56.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291624 s, 36.0 MB/s 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.623 09:40:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.624 09:40:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.624 09:40:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.624 09:40:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.883 09:40:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.142 09:40:41 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.142 09:40:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.401 09:40:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.401 09:40:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.401 09:40:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.401 09:40:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.659 09:40:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.659 09:40:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.659 09:40:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.660 09:40:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.660 09:40:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.660 09:40:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.660 09:40:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.660 09:40:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.660 09:40:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.918 09:40:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.306 
[2024-10-11 09:40:43.691489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.306 [2024-10-11 09:40:43.822616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.306 [2024-10-11 09:40:43.822617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.565 [2024-10-11 09:40:44.058695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.565 [2024-10-11 09:40:44.058794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.944 09:40:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58696 /var/tmp/spdk-nbd.sock 00:07:00.944 09:40:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58696 ']' 00:07:00.944 09:40:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.944 09:40:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.944 09:40:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:00.944 09:40:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.944 09:40:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:01.210 09:40:45 event.app_repeat -- event/event.sh@39 -- # killprocess 58696 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58696 ']' 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58696 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58696 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.210 killing process with pid 58696 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58696' 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58696 00:07:01.210 09:40:45 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58696 00:07:02.596 spdk_app_start is called in Round 0. 00:07:02.596 Shutdown signal received, stop current app iteration 00:07:02.596 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:07:02.596 spdk_app_start is called in Round 1. 00:07:02.596 Shutdown signal received, stop current app iteration 00:07:02.596 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:07:02.596 spdk_app_start is called in Round 2. 
00:07:02.596 Shutdown signal received, stop current app iteration 00:07:02.596 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:07:02.596 spdk_app_start is called in Round 3. 00:07:02.596 Shutdown signal received, stop current app iteration 00:07:02.596 09:40:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:02.596 09:40:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:02.596 00:07:02.596 real 0m21.232s 00:07:02.596 user 0m46.532s 00:07:02.596 sys 0m2.802s 00:07:02.596 09:40:46 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.596 09:40:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.596 ************************************ 00:07:02.596 END TEST app_repeat 00:07:02.596 ************************************ 00:07:02.596 09:40:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:02.596 09:40:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:02.596 09:40:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.596 09:40:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.596 09:40:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.596 ************************************ 00:07:02.596 START TEST cpu_locks 00:07:02.596 ************************************ 00:07:02.596 09:40:47 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:02.596 * Looking for test storage... 
00:07:02.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:02.596 09:40:47 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.596 09:40:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.596 09:40:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.856 09:40:47 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.856 09:40:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:02.856 09:40:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.856 09:40:47 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.856 --rc genhtml_branch_coverage=1 00:07:02.856 --rc genhtml_function_coverage=1 00:07:02.856 --rc genhtml_legend=1 00:07:02.856 --rc geninfo_all_blocks=1 00:07:02.857 --rc geninfo_unexecuted_blocks=1 00:07:02.857 00:07:02.857 ' 00:07:02.857 09:40:47 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.857 --rc genhtml_branch_coverage=1 00:07:02.857 --rc genhtml_function_coverage=1 00:07:02.857 --rc genhtml_legend=1 00:07:02.857 --rc geninfo_all_blocks=1 00:07:02.857 --rc geninfo_unexecuted_blocks=1 
00:07:02.857 00:07:02.857 ' 00:07:02.857 09:40:47 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.857 --rc genhtml_branch_coverage=1 00:07:02.857 --rc genhtml_function_coverage=1 00:07:02.857 --rc genhtml_legend=1 00:07:02.857 --rc geninfo_all_blocks=1 00:07:02.857 --rc geninfo_unexecuted_blocks=1 00:07:02.857 00:07:02.857 ' 00:07:02.857 09:40:47 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.857 --rc genhtml_branch_coverage=1 00:07:02.857 --rc genhtml_function_coverage=1 00:07:02.857 --rc genhtml_legend=1 00:07:02.857 --rc geninfo_all_blocks=1 00:07:02.857 --rc geninfo_unexecuted_blocks=1 00:07:02.857 00:07:02.857 ' 00:07:02.857 09:40:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:02.857 09:40:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:02.857 09:40:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:02.857 09:40:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:02.857 09:40:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.857 09:40:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.857 09:40:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.857 ************************************ 00:07:02.857 START TEST default_locks 00:07:02.857 ************************************ 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59167 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.857 
09:40:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59167 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59167 ']' 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.857 09:40:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.857 [2024-10-11 09:40:47.396677] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:07:02.857 [2024-10-11 09:40:47.396854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59167 ] 00:07:03.117 [2024-10-11 09:40:47.566215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.117 [2024-10-11 09:40:47.693210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.089 09:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.089 09:40:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:04.089 09:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59167 00:07:04.089 09:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59167 00:07:04.089 09:40:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59167 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59167 ']' 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59167 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59167 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.664 killing process with pid 59167 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 59167' 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59167 00:07:04.664 09:40:49 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59167 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59167 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59167 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59167 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59167 ']' 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.985 ERROR: process (pid: 59167) is no longer running 00:07:07.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59167) - No such process 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.985 00:07:07.985 real 0m4.830s 00:07:07.985 user 0m4.794s 00:07:07.985 sys 0m0.712s 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.985 09:40:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.985 ************************************ 00:07:07.985 END TEST default_locks 00:07:07.985 ************************************ 00:07:07.985 09:40:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:07.985 09:40:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:07:07.985 09:40:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.985 09:40:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.985 ************************************ 00:07:07.985 START TEST default_locks_via_rpc 00:07:07.985 ************************************ 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59253 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59253 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59253 ']' 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.985 09:40:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.985 [2024-10-11 09:40:52.306251] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:07:07.985 [2024-10-11 09:40:52.306421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:07:07.985 [2024-10-11 09:40:52.478939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.244 [2024-10-11 09:40:52.655911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.623 09:40:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59253 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59253 00:07:09.623 09:40:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59253 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59253 ']' 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59253 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59253 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.623 killing process with pid 59253 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59253' 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59253 00:07:09.623 09:40:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59253 00:07:12.911 00:07:12.911 real 0m5.088s 00:07:12.911 user 0m4.822s 00:07:12.911 sys 0m0.850s 00:07:12.911 09:40:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.911 09:40:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 ************************************ 00:07:12.911 END TEST default_locks_via_rpc 00:07:12.911 ************************************ 00:07:12.911 09:40:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:12.911 09:40:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.911 09:40:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.911 09:40:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 ************************************ 00:07:12.911 START TEST non_locking_app_on_locked_coremask 00:07:12.911 ************************************ 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59340 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59340 /var/tmp/spdk.sock 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59340 ']' 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.911 09:40:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 [2024-10-11 09:40:57.443895] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:12.911 [2024-10-11 09:40:57.444452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:07:13.170 [2024-10-11 09:40:57.611409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.170 [2024-10-11 09:40:57.768942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59356 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59356 /var/tmp/spdk2.sock 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59356 ']' 00:07:14.546 09:40:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.546 09:40:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.546 [2024-10-11 09:40:59.025370] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:14.546 [2024-10-11 09:40:59.025514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59356 ] 00:07:14.805 [2024-10-11 09:40:59.179391] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.805 [2024-10-11 09:40:59.179478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.064 [2024-10-11 09:40:59.483747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.603 09:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.603 09:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:17.603 09:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59340 00:07:17.603 09:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59340 00:07:17.603 09:41:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59340 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59340 ']' 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59340 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59340 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.172 killing process with pid 59340 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 59340' 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59340 00:07:18.172 09:41:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59340 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59356 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59356 ']' 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59356 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59356 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.744 killing process with pid 59356 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59356' 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59356 00:07:24.744 09:41:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59356 00:07:27.278 00:07:27.278 real 0m14.048s 00:07:27.278 user 0m14.056s 00:07:27.278 sys 0m1.819s 00:07:27.278 09:41:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:27.278 09:41:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.278 ************************************ 00:07:27.278 END TEST non_locking_app_on_locked_coremask 00:07:27.278 ************************************ 00:07:27.278 09:41:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:27.278 09:41:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.278 09:41:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.278 09:41:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.278 ************************************ 00:07:27.278 START TEST locking_app_on_unlocked_coremask 00:07:27.278 ************************************ 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59532 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59532 /var/tmp/spdk.sock 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59532 ']' 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.278 09:41:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.278 [2024-10-11 09:41:11.552008] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:27.279 [2024-10-11 09:41:11.552161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:07:27.279 [2024-10-11 09:41:11.700436] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:27.279 [2024-10-11 09:41:11.700504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.279 [2024-10-11 09:41:11.855675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59548 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59548 /var/tmp/spdk2.sock 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59548 ']' 
00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.654 09:41:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.654 [2024-10-11 09:41:13.094942] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:28.654 [2024-10-11 09:41:13.095086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59548 ] 00:07:28.654 [2024-10-11 09:41:13.261379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.220 [2024-10-11 09:41:13.584436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.749 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.749 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:31.749 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59548 00:07:31.749 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59548 00:07:31.749 09:41:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59532 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59532 ']' 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59532 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59532 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59532' 00:07:32.007 killing process with pid 59532 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59532 00:07:32.007 09:41:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59532 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59548 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59548 ']' 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59548 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59548 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.577 killing process with pid 59548 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59548' 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59548 00:07:38.577 09:41:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59548 00:07:40.483 00:07:40.483 real 0m13.506s 00:07:40.483 user 0m13.519s 00:07:40.483 sys 0m1.559s 00:07:40.483 09:41:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.483 09:41:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.483 ************************************ 00:07:40.483 END TEST locking_app_on_unlocked_coremask 00:07:40.483 ************************************ 00:07:40.483 09:41:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:40.483 09:41:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.483 09:41:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.483 09:41:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.483 ************************************ 00:07:40.483 START TEST 
locking_app_on_locked_coremask 00:07:40.483 ************************************ 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59718 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59718 /var/tmp/spdk.sock 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59718 ']' 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.483 09:41:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.742 [2024-10-11 09:41:25.140655] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:07:40.742 [2024-10-11 09:41:25.140809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:07:40.742 [2024-10-11 09:41:25.311173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.001 [2024-10-11 09:41:25.467802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59734 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59734 /var/tmp/spdk2.sock 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59734 /var/tmp/spdk2.sock 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59734 /var/tmp/spdk2.sock 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59734 ']' 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.378 09:41:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.378 [2024-10-11 09:41:26.697289] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:42.378 [2024-10-11 09:41:26.697447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59734 ] 00:07:42.378 [2024-10-11 09:41:26.855530] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59718 has claimed it. 00:07:42.378 [2024-10-11 09:41:26.855598] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:42.948 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59734) - No such process 00:07:42.948 ERROR: process (pid: 59734) is no longer running 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59718 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.948 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59718 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59718 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59718 ']' 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59718 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59718 00:07:43.208 
09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.208 killing process with pid 59718 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59718' 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59718 00:07:43.208 09:41:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59718 00:07:46.502 00:07:46.502 real 0m5.442s 00:07:46.502 user 0m5.463s 00:07:46.502 sys 0m0.961s 00:07:46.502 09:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.502 09:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.502 ************************************ 00:07:46.502 END TEST locking_app_on_locked_coremask 00:07:46.502 ************************************ 00:07:46.502 09:41:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:46.502 09:41:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.502 09:41:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.502 09:41:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.502 ************************************ 00:07:46.502 START TEST locking_overlapped_coremask 00:07:46.502 ************************************ 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59809 00:07:46.502 09:41:30 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59809 /var/tmp/spdk.sock 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59809 ']' 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.502 09:41:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.502 [2024-10-11 09:41:30.655807] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:07:46.502 [2024-10-11 09:41:30.656499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:07:46.502 [2024-10-11 09:41:30.819555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.502 [2024-10-11 09:41:30.975081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.502 [2024-10-11 09:41:30.975276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.502 [2024-10-11 09:41:30.975332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59833 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59833 /var/tmp/spdk2.sock 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59833 /var/tmp/spdk2.sock 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59833 /var/tmp/spdk2.sock 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59833 ']' 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.883 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.883 [2024-10-11 09:41:32.216090] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:47.883 [2024-10-11 09:41:32.216236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:07:47.883 [2024-10-11 09:41:32.386475] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59809 has claimed it. 00:07:47.883 [2024-10-11 09:41:32.390784] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:48.453 ERROR: process (pid: 59833) is no longer running 00:07:48.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59833) - No such process 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59809 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59809 ']' 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59809 00:07:48.453 09:41:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59809 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.453 killing process with pid 59809 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59809' 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59809 00:07:48.453 09:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59809 00:07:50.988 00:07:50.988 real 0m4.870s 00:07:50.988 user 0m13.084s 00:07:50.988 sys 0m0.798s 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.988 ************************************ 00:07:50.988 END TEST locking_overlapped_coremask 00:07:50.988 ************************************ 00:07:50.988 09:41:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:50.988 09:41:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.988 09:41:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.988 09:41:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.988 ************************************ 00:07:50.988 START TEST 
locking_overlapped_coremask_via_rpc 00:07:50.988 ************************************ 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:50.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59902 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59902 /var/tmp/spdk.sock 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59902 ']' 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.988 09:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:50.988 [2024-10-11 09:41:35.568230] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:07:50.988 [2024-10-11 09:41:35.568351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:07:51.248 [2024-10-11 09:41:35.733229] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:51.248 [2024-10-11 09:41:35.733288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.248 [2024-10-11 09:41:35.860796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.248 [2024-10-11 09:41:35.860838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.248 [2024-10-11 09:41:35.860864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59920 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59920 /var/tmp/spdk2.sock 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59920 ']' 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.208 09:41:36 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.208 09:41:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.467 [2024-10-11 09:41:36.901368] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:52.467 [2024-10-11 09:41:36.901500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 00:07:52.467 [2024-10-11 09:41:37.064460] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:52.467 [2024-10-11 09:41:37.064557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.033 [2024-10-11 09:41:37.397571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.033 [2024-10-11 09:41:37.397762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.033 [2024-10-11 09:41:37.398544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.565 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.565 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:55.565 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:55.565 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.565 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.565 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.566 09:41:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.566 [2024-10-11 09:41:39.664037] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59902 has claimed it. 00:07:55.566 request: 00:07:55.566 { 00:07:55.566 "method": "framework_enable_cpumask_locks", 00:07:55.566 "req_id": 1 00:07:55.566 } 00:07:55.566 Got JSON-RPC error response 00:07:55.566 response: 00:07:55.566 { 00:07:55.566 "code": -32603, 00:07:55.566 "message": "Failed to claim CPU core: 2" 00:07:55.566 } 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59902 /var/tmp/spdk.sock 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59902 ']' 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59920 /var/tmp/spdk2.sock 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59920 ']' 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.566 09:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:55.566 00:07:55.566 real 0m4.649s 00:07:55.566 user 0m1.345s 00:07:55.566 sys 0m0.193s 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.566 09:41:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.566 ************************************ 00:07:55.566 END TEST locking_overlapped_coremask_via_rpc 00:07:55.566 ************************************ 00:07:55.566 09:41:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:55.566 09:41:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59902 ]] 00:07:55.566 09:41:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59902 00:07:55.566 09:41:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59902 ']' 00:07:55.566 09:41:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59902 00:07:55.566 09:41:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:55.566 09:41:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.566 09:41:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59902 00:07:55.825 09:41:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.825 09:41:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.825 killing process with pid 59902 00:07:55.825 09:41:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59902' 00:07:55.825 09:41:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59902 00:07:55.825 09:41:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59902 00:07:58.358 09:41:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59920 ]] 00:07:58.358 09:41:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59920 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59920 ']' 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59920 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59920 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59920' 00:07:58.358 killing 
process with pid 59920 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59920 00:07:58.358 09:41:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59920 00:08:01.647 09:41:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:01.647 09:41:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:01.647 09:41:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59902 ]] 00:08:01.647 09:41:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59902 00:08:01.647 Process with pid 59902 is not found 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59902 ']' 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59902 00:08:01.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59902) - No such process 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59902 is not found' 00:08:01.647 09:41:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59920 ]] 00:08:01.647 09:41:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59920 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59920 ']' 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59920 00:08:01.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59920) - No such process 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59920 is not found' 00:08:01.647 Process with pid 59920 is not found 00:08:01.647 09:41:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:01.647 00:08:01.647 real 0m58.627s 00:08:01.647 user 1m35.544s 00:08:01.647 sys 0m8.354s 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.647 09:41:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.647 
************************************ 00:08:01.647 END TEST cpu_locks 00:08:01.647 ************************************ 00:08:01.647 00:08:01.648 real 1m33.046s 00:08:01.648 user 2m47.021s 00:08:01.648 sys 0m12.575s 00:08:01.648 09:41:45 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.648 09:41:45 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.648 ************************************ 00:08:01.648 END TEST event 00:08:01.648 ************************************ 00:08:01.648 09:41:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:01.648 09:41:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.648 09:41:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.648 09:41:45 -- common/autotest_common.sh@10 -- # set +x 00:08:01.648 ************************************ 00:08:01.648 START TEST thread 00:08:01.648 ************************************ 00:08:01.648 09:41:45 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:01.648 * Looking for test storage... 
00:08:01.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:01.648 09:41:45 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:01.648 09:41:45 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:01.648 09:41:45 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:01.648 09:41:45 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:01.648 09:41:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.648 09:41:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.648 09:41:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.648 09:41:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.648 09:41:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.648 09:41:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.648 09:41:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.648 09:41:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.648 09:41:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.648 09:41:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.648 09:41:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.648 09:41:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:01.648 09:41:45 thread -- scripts/common.sh@345 -- # : 1 00:08:01.648 09:41:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.648 09:41:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.648 09:41:45 thread -- scripts/common.sh@365 -- # decimal 1 00:08:01.648 09:41:45 thread -- scripts/common.sh@353 -- # local d=1 00:08:01.648 09:41:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.648 09:41:45 thread -- scripts/common.sh@355 -- # echo 1 00:08:01.648 09:41:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.648 09:41:46 thread -- scripts/common.sh@366 -- # decimal 2 00:08:01.648 09:41:46 thread -- scripts/common.sh@353 -- # local d=2 00:08:01.648 09:41:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.648 09:41:46 thread -- scripts/common.sh@355 -- # echo 2 00:08:01.648 09:41:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.648 09:41:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.648 09:41:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.648 09:41:46 thread -- scripts/common.sh@368 -- # return 0 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:01.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.648 --rc genhtml_branch_coverage=1 00:08:01.648 --rc genhtml_function_coverage=1 00:08:01.648 --rc genhtml_legend=1 00:08:01.648 --rc geninfo_all_blocks=1 00:08:01.648 --rc geninfo_unexecuted_blocks=1 00:08:01.648 00:08:01.648 ' 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:01.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.648 --rc genhtml_branch_coverage=1 00:08:01.648 --rc genhtml_function_coverage=1 00:08:01.648 --rc genhtml_legend=1 00:08:01.648 --rc geninfo_all_blocks=1 00:08:01.648 --rc geninfo_unexecuted_blocks=1 00:08:01.648 00:08:01.648 ' 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:01.648 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.648 --rc genhtml_branch_coverage=1 00:08:01.648 --rc genhtml_function_coverage=1 00:08:01.648 --rc genhtml_legend=1 00:08:01.648 --rc geninfo_all_blocks=1 00:08:01.648 --rc geninfo_unexecuted_blocks=1 00:08:01.648 00:08:01.648 ' 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:01.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.648 --rc genhtml_branch_coverage=1 00:08:01.648 --rc genhtml_function_coverage=1 00:08:01.648 --rc genhtml_legend=1 00:08:01.648 --rc geninfo_all_blocks=1 00:08:01.648 --rc geninfo_unexecuted_blocks=1 00:08:01.648 00:08:01.648 ' 00:08:01.648 09:41:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.648 09:41:46 thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.648 ************************************ 00:08:01.648 START TEST thread_poller_perf 00:08:01.648 ************************************ 00:08:01.648 09:41:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:01.648 [2024-10-11 09:41:46.078795] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:08:01.648 [2024-10-11 09:41:46.078929] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:08:01.648 [2024-10-11 09:41:46.251502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.907 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:01.907 [2024-10-11 09:41:46.375729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.284 [2024-10-11T09:41:47.916Z] ====================================== 00:08:03.284 [2024-10-11T09:41:47.916Z] busy:2300195990 (cyc) 00:08:03.284 [2024-10-11T09:41:47.916Z] total_run_count: 365000 00:08:03.284 [2024-10-11T09:41:47.916Z] tsc_hz: 2290000000 (cyc) 00:08:03.284 [2024-10-11T09:41:47.917Z] ====================================== 00:08:03.285 [2024-10-11T09:41:47.917Z] poller_cost: 6301 (cyc), 2751 (nsec) 00:08:03.285 ************************************ 00:08:03.285 END TEST thread_poller_perf 00:08:03.285 ************************************ 00:08:03.285 00:08:03.285 real 0m1.602s 00:08:03.285 user 0m1.388s 00:08:03.285 sys 0m0.107s 00:08:03.285 09:41:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.285 09:41:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:03.285 09:41:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:03.285 09:41:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:03.285 09:41:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.285 09:41:47 thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.285 ************************************ 00:08:03.285 START TEST thread_poller_perf 00:08:03.285 
************************************ 00:08:03.285 09:41:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:03.285 [2024-10-11 09:41:47.739136] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:03.285 [2024-10-11 09:41:47.739244] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:08:03.285 [2024-10-11 09:41:47.901418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.544 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:03.544 [2024-10-11 09:41:48.026589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.925 [2024-10-11T09:41:49.557Z] ====================================== 00:08:04.925 [2024-10-11T09:41:49.557Z] busy:2293767260 (cyc) 00:08:04.925 [2024-10-11T09:41:49.557Z] total_run_count: 4769000 00:08:04.925 [2024-10-11T09:41:49.557Z] tsc_hz: 2290000000 (cyc) 00:08:04.925 [2024-10-11T09:41:49.557Z] ====================================== 00:08:04.925 [2024-10-11T09:41:49.557Z] poller_cost: 480 (cyc), 209 (nsec) 00:08:04.925 ************************************ 00:08:04.925 END TEST thread_poller_perf 00:08:04.925 ************************************ 00:08:04.925 00:08:04.925 real 0m1.585s 00:08:04.925 user 0m1.378s 00:08:04.925 sys 0m0.099s 00:08:04.925 09:41:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.925 09:41:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.925 09:41:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:04.925 ************************************ 00:08:04.925 END TEST thread 00:08:04.925 ************************************ 00:08:04.925 
00:08:04.925 real 0m3.535s 00:08:04.925 user 0m2.932s 00:08:04.925 sys 0m0.400s 00:08:04.925 09:41:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.925 09:41:49 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.925 09:41:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:04.925 09:41:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:04.925 09:41:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.925 09:41:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.925 09:41:49 -- common/autotest_common.sh@10 -- # set +x 00:08:04.925 ************************************ 00:08:04.925 START TEST app_cmdline 00:08:04.925 ************************************ 00:08:04.925 09:41:49 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:04.925 * Looking for test storage... 00:08:04.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:04.925 09:41:49 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:04.925 09:41:49 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:04.925 09:41:49 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.185 09:41:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.185 --rc genhtml_branch_coverage=1 00:08:05.185 --rc genhtml_function_coverage=1 00:08:05.185 --rc 
genhtml_legend=1 00:08:05.185 --rc geninfo_all_blocks=1 00:08:05.185 --rc geninfo_unexecuted_blocks=1 00:08:05.185 00:08:05.185 ' 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.185 --rc genhtml_branch_coverage=1 00:08:05.185 --rc genhtml_function_coverage=1 00:08:05.185 --rc genhtml_legend=1 00:08:05.185 --rc geninfo_all_blocks=1 00:08:05.185 --rc geninfo_unexecuted_blocks=1 00:08:05.185 00:08:05.185 ' 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.185 --rc genhtml_branch_coverage=1 00:08:05.185 --rc genhtml_function_coverage=1 00:08:05.185 --rc genhtml_legend=1 00:08:05.185 --rc geninfo_all_blocks=1 00:08:05.185 --rc geninfo_unexecuted_blocks=1 00:08:05.185 00:08:05.185 ' 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.185 --rc genhtml_branch_coverage=1 00:08:05.185 --rc genhtml_function_coverage=1 00:08:05.185 --rc genhtml_legend=1 00:08:05.185 --rc geninfo_all_blocks=1 00:08:05.185 --rc geninfo_unexecuted_blocks=1 00:08:05.185 00:08:05.185 ' 00:08:05.185 09:41:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:05.185 09:41:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60253 00:08:05.185 09:41:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:05.185 09:41:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60253 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60253 ']' 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.185 09:41:49 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:08:05.186 09:41:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.186 09:41:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.186 09:41:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:05.186 [2024-10-11 09:41:49.700338] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:05.186 [2024-10-11 09:41:49.700570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60253 ] 00:08:05.445 [2024-10-11 09:41:49.866216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.445 [2024-10-11 09:41:50.007075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.385 09:41:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.385 09:41:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:06.385 09:41:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:06.644 { 00:08:06.644 "version": "SPDK v25.01-pre git sha1 5031f0f3b", 00:08:06.644 "fields": { 00:08:06.644 "major": 25, 00:08:06.644 "minor": 1, 00:08:06.644 "patch": 0, 00:08:06.644 "suffix": "-pre", 00:08:06.644 "commit": "5031f0f3b" 00:08:06.644 } 00:08:06.644 } 00:08:06.644 09:41:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:06.644 09:41:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:06.644 09:41:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:06.644 09:41:51 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:06.644 09:41:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:06.644 09:41:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.645 09:41:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:06.645 09:41:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:06.645 09:41:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:06.645 09:41:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.903 09:41:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:06.903 09:41:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:06.903 09:41:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:06.903 09:41:51 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:07.161 request: 00:08:07.161 { 00:08:07.161 "method": "env_dpdk_get_mem_stats", 00:08:07.161 "req_id": 1 00:08:07.161 } 00:08:07.161 Got JSON-RPC error response 00:08:07.161 response: 00:08:07.161 { 00:08:07.161 "code": -32601, 00:08:07.161 "message": "Method not found" 00:08:07.161 } 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.161 09:41:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60253 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60253 ']' 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60253 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60253 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.161 killing process with pid 60253 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60253' 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 60253 00:08:07.161 09:41:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 60253 00:08:09.687 ************************************ 00:08:09.687 00:08:09.687 real 0m4.713s 00:08:09.687 user 0m5.056s 00:08:09.687 sys 
0m0.660s 00:08:09.687 09:41:54 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.687 09:41:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.687 END TEST app_cmdline 00:08:09.687 ************************************ 00:08:09.687 09:41:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:09.687 09:41:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.687 09:41:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.687 09:41:54 -- common/autotest_common.sh@10 -- # set +x 00:08:09.687 ************************************ 00:08:09.687 START TEST version 00:08:09.687 ************************************ 00:08:09.687 09:41:54 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:09.687 * Looking for test storage... 00:08:09.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:09.687 09:41:54 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.687 09:41:54 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.687 09:41:54 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.946 09:41:54 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.946 09:41:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.946 09:41:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.946 09:41:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.946 09:41:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.946 09:41:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.946 09:41:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.946 09:41:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.946 09:41:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.946 09:41:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.946 09:41:54 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:09.946 09:41:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.946 09:41:54 version -- scripts/common.sh@344 -- # case "$op" in 00:08:09.946 09:41:54 version -- scripts/common.sh@345 -- # : 1 00:08:09.946 09:41:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.946 09:41:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.946 09:41:54 version -- scripts/common.sh@365 -- # decimal 1 00:08:09.946 09:41:54 version -- scripts/common.sh@353 -- # local d=1 00:08:09.946 09:41:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.946 09:41:54 version -- scripts/common.sh@355 -- # echo 1 00:08:09.946 09:41:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.946 09:41:54 version -- scripts/common.sh@366 -- # decimal 2 00:08:09.946 09:41:54 version -- scripts/common.sh@353 -- # local d=2 00:08:09.946 09:41:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.946 09:41:54 version -- scripts/common.sh@355 -- # echo 2 00:08:09.946 09:41:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.946 09:41:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.946 09:41:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.946 09:41:54 version -- scripts/common.sh@368 -- # return 0 00:08:09.946 09:41:54 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.946 09:41:54 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.946 --rc genhtml_branch_coverage=1 00:08:09.947 --rc genhtml_function_coverage=1 00:08:09.947 --rc genhtml_legend=1 00:08:09.947 --rc geninfo_all_blocks=1 00:08:09.947 --rc geninfo_unexecuted_blocks=1 00:08:09.947 00:08:09.947 ' 00:08:09.947 09:41:54 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:08:09.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.947 --rc genhtml_branch_coverage=1 00:08:09.947 --rc genhtml_function_coverage=1 00:08:09.947 --rc genhtml_legend=1 00:08:09.947 --rc geninfo_all_blocks=1 00:08:09.947 --rc geninfo_unexecuted_blocks=1 00:08:09.947 00:08:09.947 ' 00:08:09.947 09:41:54 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.947 --rc genhtml_branch_coverage=1 00:08:09.947 --rc genhtml_function_coverage=1 00:08:09.947 --rc genhtml_legend=1 00:08:09.947 --rc geninfo_all_blocks=1 00:08:09.947 --rc geninfo_unexecuted_blocks=1 00:08:09.947 00:08:09.947 ' 00:08:09.947 09:41:54 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.947 --rc genhtml_branch_coverage=1 00:08:09.947 --rc genhtml_function_coverage=1 00:08:09.947 --rc genhtml_legend=1 00:08:09.947 --rc geninfo_all_blocks=1 00:08:09.947 --rc geninfo_unexecuted_blocks=1 00:08:09.947 00:08:09.947 ' 00:08:09.947 09:41:54 version -- app/version.sh@17 -- # get_header_version major 00:08:09.947 09:41:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # cut -f2 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.947 09:41:54 version -- app/version.sh@17 -- # major=25 00:08:09.947 09:41:54 version -- app/version.sh@18 -- # get_header_version minor 00:08:09.947 09:41:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # cut -f2 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.947 09:41:54 version -- app/version.sh@18 -- # minor=1 00:08:09.947 09:41:54 
version -- app/version.sh@19 -- # get_header_version patch 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # cut -f2 00:08:09.947 09:41:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.947 09:41:54 version -- app/version.sh@19 -- # patch=0 00:08:09.947 09:41:54 version -- app/version.sh@20 -- # get_header_version suffix 00:08:09.947 09:41:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # cut -f2 00:08:09.947 09:41:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.947 09:41:54 version -- app/version.sh@20 -- # suffix=-pre 00:08:09.947 09:41:54 version -- app/version.sh@22 -- # version=25.1 00:08:09.947 09:41:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:09.947 09:41:54 version -- app/version.sh@28 -- # version=25.1rc0 00:08:09.947 09:41:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:09.947 09:41:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:09.947 09:41:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:09.947 09:41:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:09.947 00:08:09.947 real 0m0.326s 00:08:09.947 user 0m0.203s 00:08:09.947 sys 0m0.178s 00:08:09.947 09:41:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.947 ************************************ 00:08:09.947 END TEST version 00:08:09.947 ************************************ 00:08:09.947 09:41:54 version -- common/autotest_common.sh@10 -- # set +x 00:08:09.947 
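The `version.sh` trace above builds `25.1rc0` from the header fields (`major=25`, `minor=1`, `patch=0`, `suffix=-pre`) and compares it against the Python package's `spdk.__version__`. A sketch of that assembly logic, inferred from the log output rather than quoted from the script (the `-pre` to `rc0` mapping is an assumption based on the observed `version=25.1` followed by `version=25.1rc0`):

```python
# Hypothetical helper mirroring what app/version.sh appears to do:
# join major.minor, append .patch only when nonzero, and map the
# "-pre" suffix to an "rc0" pre-release marker.
def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"
    if suffix == "-pre":
        version += "rc0"
    return version

print(spdk_version(25, 1, 0, "-pre"))  # 25.1rc0
```

The test then passes because this string equals the `py_version=25.1rc0` reported by `python3 -c 'import spdk; print(spdk.__version__)'`.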
09:41:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:09.947 09:41:54 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:09.947 09:41:54 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:09.947 09:41:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.947 09:41:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.947 09:41:54 -- common/autotest_common.sh@10 -- # set +x 00:08:09.947 ************************************ 00:08:09.947 START TEST bdev_raid 00:08:09.947 ************************************ 00:08:09.947 09:41:54 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:10.206 * Looking for test storage... 00:08:10.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:10.206 09:41:54 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:10.206 09:41:54 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:08:10.206 09:41:54 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:10.206 09:41:54 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.206 09:41:54 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:10.206 09:41:54 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.206 09:41:54 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:10.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.206 --rc genhtml_branch_coverage=1 00:08:10.206 --rc genhtml_function_coverage=1 00:08:10.206 --rc genhtml_legend=1 00:08:10.207 --rc geninfo_all_blocks=1 00:08:10.207 --rc geninfo_unexecuted_blocks=1 00:08:10.207 00:08:10.207 ' 00:08:10.207 09:41:54 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:10.207 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:10.207 --rc genhtml_branch_coverage=1 00:08:10.207 --rc genhtml_function_coverage=1 00:08:10.207 --rc genhtml_legend=1 00:08:10.207 --rc geninfo_all_blocks=1 00:08:10.207 --rc geninfo_unexecuted_blocks=1 00:08:10.207 00:08:10.207 ' 00:08:10.207 09:41:54 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:10.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.207 --rc genhtml_branch_coverage=1 00:08:10.207 --rc genhtml_function_coverage=1 00:08:10.207 --rc genhtml_legend=1 00:08:10.207 --rc geninfo_all_blocks=1 00:08:10.207 --rc geninfo_unexecuted_blocks=1 00:08:10.207 00:08:10.207 ' 00:08:10.207 09:41:54 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:10.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.207 --rc genhtml_branch_coverage=1 00:08:10.207 --rc genhtml_function_coverage=1 00:08:10.207 --rc genhtml_legend=1 00:08:10.207 --rc geninfo_all_blocks=1 00:08:10.207 --rc geninfo_unexecuted_blocks=1 00:08:10.207 00:08:10.207 ' 00:08:10.207 09:41:54 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:10.207 09:41:54 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:10.207 09:41:54 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:10.207 09:41:54 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:10.207 09:41:54 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:10.207 09:41:54 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:10.207 09:41:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:10.207 09:41:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.207 09:41:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.207 09:41:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.207 ************************************ 
00:08:10.207 START TEST raid1_resize_data_offset_test 00:08:10.207 ************************************ 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60446 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60446' 00:08:10.207 Process raid pid: 60446 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60446 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60446 ']' 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.207 09:41:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.466 [2024-10-11 09:41:54.923704] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:08:10.466 [2024-10-11 09:41:54.923958] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.466 [2024-10-11 09:41:55.093333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.726 [2024-10-11 09:41:55.240694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.984 [2024-10-11 09:41:55.510064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.984 [2024-10-11 09:41:55.510227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.243 09:41:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.243 09:41:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:08:11.243 09:41:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:11.243 09:41:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.243 09:41:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.501 malloc0 00:08:11.501 09:41:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.501 09:41:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:11.501 09:41:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.501 09:41:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.501 malloc1 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.501 09:41:56 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.501 null0 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.501 [2024-10-11 09:41:56.040515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:11.501 [2024-10-11 09:41:56.042676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:11.501 [2024-10-11 09:41:56.042755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:11.501 [2024-10-11 09:41:56.042939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:11.501 [2024-10-11 09:41:56.042954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:11.501 [2024-10-11 09:41:56.043302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:11.501 [2024-10-11 09:41:56.043524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:11.501 [2024-10-11 09:41:56.043540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:11.501 [2024-10-11 09:41:56.043841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.501 [2024-10-11 09:41:56.100423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.501 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.441 malloc2 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.441 [2024-10-11 09:41:56.785337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:12.441 [2024-10-11 09:41:56.808289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.441 [2024-10-11 09:41:56.810394] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60446 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60446 ']' 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60446 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60446 00:08:12.441 killing process with pid 60446 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60446' 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60446 00:08:12.441 09:41:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60446 00:08:12.441 [2024-10-11 09:41:56.902676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.441 [2024-10-11 09:41:56.903937] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:12.441 [2024-10-11 09:41:56.904006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.441 [2024-10-11 09:41:56.904025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:12.441 [2024-10-11 09:41:56.945114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.441 [2024-10-11 09:41:56.945486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.441 [2024-10-11 09:41:56.945507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:14.347 [2024-10-11 09:41:58.851277] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.726 09:41:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:15.726 00:08:15.726 real 0m5.145s 00:08:15.726 user 0m5.115s 00:08:15.726 sys 0m0.552s 00:08:15.726 09:41:59 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.726 09:41:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.726 ************************************ 00:08:15.726 END TEST raid1_resize_data_offset_test 00:08:15.726 ************************************ 00:08:15.726 09:42:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:15.726 09:42:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.726 09:42:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.726 09:42:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.726 ************************************ 00:08:15.726 START TEST raid0_resize_superblock_test 00:08:15.726 ************************************ 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60535 00:08:15.726 Process raid pid: 60535 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60535' 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60535 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60535 ']' 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.726 09:42:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.726 [2024-10-11 09:42:00.117713] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:15.726 [2024-10-11 09:42:00.118279] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.726 [2024-10-11 09:42:00.282832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.985 [2024-10-11 09:42:00.418413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.244 [2024-10-11 09:42:00.666416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.244 [2024-10-11 09:42:00.666469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.503 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.503 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:16.503 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:16.503 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.503 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:17.113 malloc0 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 [2024-10-11 09:42:01.649484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:17.113 [2024-10-11 09:42:01.649572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.113 [2024-10-11 09:42:01.649611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:17.113 [2024-10-11 09:42:01.649627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.113 [2024-10-11 09:42:01.652263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.113 [2024-10-11 09:42:01.652310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:17.113 pt0 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 c4082ef2-adac-4537-b29b-ce27b62e40ab 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 ba0a1795-0e3f-4590-a200-198b4de3ef5a 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 3da5212f-3ec2-4564-81c4-62a87ab6aeff 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 [2024-10-11 09:42:01.776284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ba0a1795-0e3f-4590-a200-198b4de3ef5a is claimed 00:08:17.371 [2024-10-11 09:42:01.776394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3da5212f-3ec2-4564-81c4-62a87ab6aeff is claimed 00:08:17.371 [2024-10-11 09:42:01.776558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:17.371 [2024-10-11 09:42:01.776576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:17.371 [2024-10-11 09:42:01.776895] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.371 [2024-10-11 09:42:01.777136] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:17.371 [2024-10-11 09:42:01.777214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:17.371 [2024-10-11 09:42:01.777448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.371 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:17.371 09:42:01 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.372 [2024-10-11 09:42:01.872432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.372 [2024-10-11 09:42:01.916399] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:17.372 [2024-10-11 09:42:01.916510] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ba0a1795-0e3f-4590-a200-198b4de3ef5a' was resized: old size 131072, new size 204800 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.372 [2024-10-11 09:42:01.928404] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:17.372 [2024-10-11 09:42:01.928443] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3da5212f-3ec2-4564-81c4-62a87ab6aeff' was resized: old size 131072, new size 204800 00:08:17.372 [2024-10-11 09:42:01.928485] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.372 09:42:01 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:17.372 09:42:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 [2024-10-11 09:42:02.032333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 [2024-10-11 09:42:02.067989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:08:17.630 [2024-10-11 09:42:02.068073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:17.630 [2024-10-11 09:42:02.068088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.630 [2024-10-11 09:42:02.068101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:17.630 [2024-10-11 09:42:02.068220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.630 [2024-10-11 09:42:02.068256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.630 [2024-10-11 09:42:02.068269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 [2024-10-11 09:42:02.075880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:17.630 [2024-10-11 09:42:02.075981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.630 [2024-10-11 09:42:02.076014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:17.630 [2024-10-11 09:42:02.076029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.630 [2024-10-11 09:42:02.078897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.630 [2024-10-11 09:42:02.078948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:17.630 pt0 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.630 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 [2024-10-11 09:42:02.081216] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ba0a1795-0e3f-4590-a200-198b4de3ef5a 00:08:17.630 [2024-10-11 09:42:02.081301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ba0a1795-0e3f-4590-a200-198b4de3ef5a is claimed 00:08:17.630 [2024-10-11 09:42:02.081448] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3da5212f-3ec2-4564-81c4-62a87ab6aeff 00:08:17.630 [2024-10-11 09:42:02.081485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3da5212f-3ec2-4564-81c4-62a87ab6aeff is claimed 00:08:17.630 [2024-10-11 09:42:02.081638] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3da5212f-3ec2-4564-81c4-62a87ab6aeff (2) smaller than existing raid bdev Raid (3) 00:08:17.630 [2024-10-11 09:42:02.081662] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ba0a1795-0e3f-4590-a200-198b4de3ef5a: File exists 00:08:17.630 [2024-10-11 09:42:02.081717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:17.630 [2024-10-11 09:42:02.081731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:17.630 [2024-10-11 09:42:02.082041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:17.630 [2024-10-11 09:42:02.082315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:17.630 [2024-10-11 
09:42:02.082334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:17.630 [2024-10-11 09:42:02.082537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.631 [2024-10-11 09:42:02.096234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60535 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60535 ']' 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60535 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60535 00:08:17.631 killing process with pid 60535 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60535' 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60535 00:08:17.631 [2024-10-11 09:42:02.166909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.631 09:42:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60535 00:08:17.631 [2024-10-11 09:42:02.167025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.631 [2024-10-11 09:42:02.167094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.631 [2024-10-11 09:42:02.167106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:19.530 [2024-10-11 09:42:03.651720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.467 ************************************ 00:08:20.467 END TEST raid0_resize_superblock_test 00:08:20.467 ************************************ 00:08:20.467 09:42:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:20.467 00:08:20.467 real 0m4.826s 00:08:20.467 user 0m5.036s 00:08:20.467 sys 0m0.566s 00:08:20.467 09:42:04 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.467 09:42:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.467 09:42:04 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:20.467 09:42:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.467 09:42:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.467 09:42:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.467 ************************************ 00:08:20.467 START TEST raid1_resize_superblock_test 00:08:20.467 ************************************ 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:20.467 Process raid pid: 60634 00:08:20.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60634 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60634' 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60634 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60634 ']' 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.467 09:42:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.468 [2024-10-11 09:42:05.001713] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:08:20.468 [2024-10-11 09:42:05.002021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.727 [2024-10-11 09:42:05.191326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.727 [2024-10-11 09:42:05.320369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.986 [2024-10-11 09:42:05.541575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.986 [2024-10-11 09:42:05.541713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.555 09:42:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.555 09:42:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:21.555 09:42:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:21.555 09:42:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.555 09:42:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 malloc0 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 [2024-10-11 09:42:06.521799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:22.124 [2024-10-11 09:42:06.521873] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.124 [2024-10-11 09:42:06.521900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:22.124 [2024-10-11 09:42:06.521912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.124 [2024-10-11 09:42:06.524139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.124 [2024-10-11 09:42:06.524182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:22.124 pt0 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 b0eb0382-7b7d-4a37-b814-058bb5968ac0 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 1fa91f72-2269-4f9e-84c5-17219d1dcf07 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 4f8b07d6-bd90-457e-989f-118fc87a6e50 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 [2024-10-11 09:42:06.657351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1fa91f72-2269-4f9e-84c5-17219d1dcf07 is claimed 00:08:22.124 [2024-10-11 09:42:06.657473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f8b07d6-bd90-457e-989f-118fc87a6e50 is claimed 00:08:22.124 [2024-10-11 09:42:06.657635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:22.124 [2024-10-11 09:42:06.657653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:22.124 [2024-10-11 09:42:06.657966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:22.124 [2024-10-11 09:42:06.658188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:22.124 [2024-10-11 09:42:06.658207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:22.124 [2024-10-11 09:42:06.658389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:22.124 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.124 [2024-10-11 
09:42:06.753505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.383 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.383 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.383 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.383 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:22.383 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:22.383 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [2024-10-11 09:42:06.797284] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:22.384 [2024-10-11 09:42:06.797364] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1fa91f72-2269-4f9e-84c5-17219d1dcf07' was resized: old size 131072, new size 204800 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [2024-10-11 09:42:06.805207] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:22.384 [2024-10-11 09:42:06.805232] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4f8b07d6-bd90-457e-989f-118fc87a6e50' was resized: old size 131072, new size 204800 00:08:22.384 
[2024-10-11 09:42:06.805301] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.384 09:42:06 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [2024-10-11 09:42:06.917200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [2024-10-11 09:42:06.960893] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:22.384 [2024-10-11 09:42:06.961041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:22.384 [2024-10-11 09:42:06.961100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:22.384 [2024-10-11 09:42:06.961310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.384 [2024-10-11 09:42:06.961590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.384 [2024-10-11 09:42:06.961712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.384 
[2024-10-11 09:42:06.961797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 [2024-10-11 09:42:06.972728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:22.384 [2024-10-11 09:42:06.972833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.384 [2024-10-11 09:42:06.972860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:22.384 [2024-10-11 09:42:06.972873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.384 [2024-10-11 09:42:06.975403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.384 [2024-10-11 09:42:06.975445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:22.384 [2024-10-11 09:42:06.977409] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1fa91f72-2269-4f9e-84c5-17219d1dcf07 00:08:22.384 [2024-10-11 09:42:06.977492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1fa91f72-2269-4f9e-84c5-17219d1dcf07 is claimed 00:08:22.384 [2024-10-11 09:42:06.977623] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4f8b07d6-bd90-457e-989f-118fc87a6e50 00:08:22.384 [2024-10-11 09:42:06.977643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f8b07d6-bd90-457e-989f-118fc87a6e50 is claimed 00:08:22.384 [2024-10-11 09:42:06.977833] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4f8b07d6-bd90-457e-989f-118fc87a6e50 (2) smaller than existing raid bdev Raid (3) 00:08:22.384 [2024-10-11 09:42:06.977859] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 1fa91f72-2269-4f9e-84c5-17219d1dcf07: File exists 00:08:22.384 [2024-10-11 09:42:06.977898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:22.384 [2024-10-11 09:42:06.977910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:22.384 pt0 00:08:22.384 [2024-10-11 09:42:06.978172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:22.384 [2024-10-11 09:42:06.978359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:22.384 [2024-10-11 09:42:06.978369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:22.384 [2024-10-11 09:42:06.978526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.384 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:22.384 [2024-10-11 09:42:06.997803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.384 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.644 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.644 09:42:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60634 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60634 ']' 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60634 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60634 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 60634' 00:08:22.644 killing process with pid 60634 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60634 00:08:22.644 [2024-10-11 09:42:07.056719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.644 [2024-10-11 09:42:07.056834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.644 [2024-10-11 09:42:07.056903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.644 09:42:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60634 00:08:22.644 [2024-10-11 09:42:07.056915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:24.022 [2024-10-11 09:42:08.513136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.405 09:42:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:25.405 00:08:25.405 real 0m4.790s 00:08:25.405 user 0m5.011s 00:08:25.405 sys 0m0.583s 00:08:25.405 09:42:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.405 09:42:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.405 ************************************ 00:08:25.405 END TEST raid1_resize_superblock_test 00:08:25.405 ************************************ 00:08:25.405 09:42:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:25.405 09:42:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:25.405 09:42:09 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:25.405 09:42:09 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:25.405 09:42:09 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:25.405 09:42:09 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:25.405 
09:42:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:25.405 09:42:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.405 09:42:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.405 ************************************ 00:08:25.405 START TEST raid_function_test_raid0 00:08:25.405 ************************************ 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60736 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60736' 00:08:25.405 Process raid pid: 60736 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60736 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60736 ']' 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.405 09:42:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:25.405 [2024-10-11 09:42:09.842404] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:25.405 [2024-10-11 09:42:09.842717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.405 [2024-10-11 09:42:10.003286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.662 [2024-10-11 09:42:10.151068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.920 [2024-10-11 09:42:10.375866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.920 [2024-10-11 09:42:10.376033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:26.490 Base_1 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.490 
09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:26.490 Base_2 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:26.490 [2024-10-11 09:42:10.971084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:26.490 [2024-10-11 09:42:10.973202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:26.490 [2024-10-11 09:42:10.973279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:26.490 [2024-10-11 09:42:10.973292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:26.490 [2024-10-11 09:42:10.973578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:26.490 [2024-10-11 09:42:10.973730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:26.490 [2024-10-11 09:42:10.973762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:26.490 [2024-10-11 09:42:10.973947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.490 09:42:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:26.490 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:26.750 [2024-10-11 09:42:11.222756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:26.750 /dev/nbd0 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:26.750 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.750 1+0 records in 00:08:26.750 1+0 records out 00:08:26.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055028 s, 7.4 MB/s 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:26.751 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:27.010 { 00:08:27.010 "nbd_device": "/dev/nbd0", 00:08:27.010 "bdev_name": "raid" 00:08:27.010 } 00:08:27.010 ]' 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:27.010 { 00:08:27.010 "nbd_device": "/dev/nbd0", 00:08:27.010 "bdev_name": "raid" 00:08:27.010 } 00:08:27.010 ]' 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:27.010 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:27.011 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:27.011 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:27.011 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:27.011 4096+0 records in 00:08:27.011 4096+0 records out 00:08:27.011 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0318516 s, 65.8 MB/s 00:08:27.011 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:27.270 4096+0 records in 00:08:27.270 4096+0 records out 00:08:27.270 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.223308 s, 9.4 MB/s 00:08:27.270 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:27.270 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:27.270 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:27.271 128+0 records in 00:08:27.271 128+0 records out 00:08:27.271 65536 bytes (66 kB, 64 KiB) copied, 0.00124008 s, 52.8 MB/s 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:27.271 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:27.530 2035+0 records in 00:08:27.530 2035+0 records out 00:08:27.530 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.013741 s, 75.8 MB/s 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:27.530 456+0 records in 00:08:27.530 456+0 records out 00:08:27.530 233472 bytes (233 kB, 228 KiB) copied, 0.00366565 s, 63.7 MB/s 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:27.530 09:42:11 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:27.531 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:27.531 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:27.531 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:27.531 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.531 09:42:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:27.791 [2024-10-11 09:42:12.207376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:27.791 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60736 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60736 ']' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60736 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60736 00:08:28.057 killing process with pid 60736 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60736' 00:08:28.057 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60736 00:08:28.058 [2024-10-11 09:42:12.582439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.058 [2024-10-11 09:42:12.582563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.058 09:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60736 00:08:28.058 [2024-10-11 09:42:12.582620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.058 [2024-10-11 09:42:12.582633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:28.326 [2024-10-11 09:42:12.799219] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.704 09:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:29.704 00:08:29.704 real 0m4.199s 00:08:29.704 user 0m5.017s 00:08:29.704 sys 0m1.038s 00:08:29.704 09:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.704 09:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:29.704 ************************************ 00:08:29.704 END TEST raid_function_test_raid0 00:08:29.704 ************************************ 00:08:29.704 09:42:14 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:29.704 09:42:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.704 09:42:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.704 09:42:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.704 
************************************ 00:08:29.704 START TEST raid_function_test_concat 00:08:29.704 ************************************ 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60865 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:29.704 Process raid pid: 60865 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60865' 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60865 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60865 ']' 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.704 09:42:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:29.704 [2024-10-11 09:42:14.105256] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:29.704 [2024-10-11 09:42:14.105445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.704 [2024-10-11 09:42:14.271975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.964 [2024-10-11 09:42:14.399421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.223 [2024-10-11 09:42:14.642090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.223 [2024-10-11 09:42:14.642244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:30.482 Base_1 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:30.482 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:30.742 Base_2 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:30.742 [2024-10-11 09:42:15.120990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:30.742 [2024-10-11 09:42:15.122932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:30.742 [2024-10-11 09:42:15.123060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:30.742 [2024-10-11 09:42:15.123111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:30.742 [2024-10-11 09:42:15.123498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:30.742 [2024-10-11 09:42:15.123707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:30.742 [2024-10-11 09:42:15.123761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:30.742 [2024-10-11 09:42:15.123997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.742 09:42:15 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:30.742 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:31.001 [2024-10-11 09:42:15.388626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:31.001 /dev/nbd0 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.001 1+0 records in 00:08:31.001 1+0 records out 00:08:31.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301641 s, 13.6 MB/s 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.001 
09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:31.001 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.261 { 00:08:31.261 "nbd_device": "/dev/nbd0", 00:08:31.261 "bdev_name": "raid" 00:08:31.261 } 00:08:31.261 ]' 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.261 { 00:08:31.261 "nbd_device": "/dev/nbd0", 00:08:31.261 "bdev_name": "raid" 00:08:31.261 } 00:08:31.261 ]' 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:31.261 
09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:31.261 4096+0 records in 00:08:31.261 4096+0 records out 00:08:31.261 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0344568 s, 60.9 MB/s 00:08:31.261 09:42:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:31.519 4096+0 records in 00:08:31.519 4096+0 
records out 00:08:31.519 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.221555 s, 9.5 MB/s 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:31.519 128+0 records in 00:08:31.519 128+0 records out 00:08:31.519 65536 bytes (66 kB, 64 KiB) copied, 0.00111947 s, 58.5 MB/s 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:08:31.519 2035+0 records in 00:08:31.519 2035+0 records out 00:08:31.519 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0149957 s, 69.5 MB/s 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:31.519 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:31.778 456+0 records in 00:08:31.778 456+0 records out 00:08:31.778 233472 bytes (233 kB, 228 KiB) copied, 0.00287118 s, 81.3 MB/s 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.778 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.038 [2024-10-11 09:42:16.415946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:32.038 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:32.038 09:42:16 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60865 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60865 ']' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60865 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60865 00:08:32.297 killing process with pid 60865 00:08:32.297 09:42:16 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60865' 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60865 00:08:32.297 [2024-10-11 09:42:16.782098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.297 [2024-10-11 09:42:16.782218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.297 [2024-10-11 09:42:16.782276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.297 09:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60865 00:08:32.297 [2024-10-11 09:42:16.782291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:32.556 [2024-10-11 09:42:17.022093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.931 ************************************ 00:08:33.931 END TEST raid_function_test_concat 00:08:33.931 ************************************ 00:08:33.931 09:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:33.931 00:08:33.931 real 0m4.193s 00:08:33.931 user 0m4.966s 00:08:33.931 sys 0m1.022s 00:08:33.931 09:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.931 09:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:33.931 09:42:18 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:33.931 09:42:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:33.931 09:42:18 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.931 09:42:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.931 ************************************ 00:08:33.931 START TEST raid0_resize_test 00:08:33.931 ************************************ 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61000 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61000' 00:08:33.931 Process raid pid: 61000 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61000 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 61000 ']' 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.931 09:42:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.931 [2024-10-11 09:42:18.380860] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:33.931 [2024-10-11 09:42:18.381124] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.931 [2024-10-11 09:42:18.538654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.190 [2024-10-11 09:42:18.685055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.450 [2024-10-11 09:42:18.918210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.450 [2024-10-11 09:42:18.918355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.708 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.708 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:34.708 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:34.708 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.708 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.708 Base_1 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.709 
09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.709 Base_2 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.709 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.709 [2024-10-11 09:42:19.338196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:34.966 [2024-10-11 09:42:19.340130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:34.966 [2024-10-11 09:42:19.340210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:34.966 [2024-10-11 09:42:19.340226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:34.966 [2024-10-11 09:42:19.340502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:34.966 [2024-10-11 09:42:19.340638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:34.966 [2024-10-11 09:42:19.340649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:34.966 [2024-10-11 09:42:19.340825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.966 
09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.966 [2024-10-11 09:42:19.346114] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:34.966 [2024-10-11 09:42:19.346141] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:34.966 true 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:34.966 [2024-10-11 09:42:19.358282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.966 [2024-10-11 09:42:19.410071] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:34.966 [2024-10-11 09:42:19.410106] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:34.966 [2024-10-11 09:42:19.410143] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:34.966 true 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:34.966 [2024-10-11 09:42:19.426205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61000 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 61000 ']' 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 61000 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61000 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61000' 00:08:34.966 killing process with pid 61000 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 61000 00:08:34.966 [2024-10-11 09:42:19.478529] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.966 [2024-10-11 09:42:19.478693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.966 09:42:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 61000 00:08:34.966 [2024-10-11 09:42:19.478774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.966 [2024-10-11 09:42:19.478788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:34.966 [2024-10-11 09:42:19.496627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.341 09:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:36.341 00:08:36.341 real 0m2.529s 00:08:36.341 user 0m2.725s 00:08:36.341 sys 0m0.351s 00:08:36.341 09:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.341 
09:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.341 ************************************ 00:08:36.341 END TEST raid0_resize_test 00:08:36.341 ************************************ 00:08:36.341 09:42:20 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:36.341 09:42:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:36.341 09:42:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.341 09:42:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.341 ************************************ 00:08:36.341 START TEST raid1_resize_test 00:08:36.341 ************************************ 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:36.341 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61056 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61056' 
00:08:36.342 Process raid pid: 61056 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61056 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 61056 ']' 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.342 09:42:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 [2024-10-11 09:42:20.963362] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:08:36.342 [2024-10-11 09:42:20.963590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.601 [2024-10-11 09:42:21.128413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.860 [2024-10-11 09:42:21.260606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.860 [2024-10-11 09:42:21.488896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.860 [2024-10-11 09:42:21.489015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.430 Base_1 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:37.430 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 Base_2 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 [2024-10-11 09:42:21.829793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:37.431 [2024-10-11 09:42:21.831687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:37.431 [2024-10-11 09:42:21.831747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:37.431 [2024-10-11 09:42:21.831774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:37.431 [2024-10-11 09:42:21.832061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:37.431 [2024-10-11 09:42:21.832216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:37.431 [2024-10-11 09:42:21.832232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:37.431 [2024-10-11 09:42:21.832386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 [2024-10-11 09:42:21.841718] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:37.431 [2024-10-11 09:42:21.841806] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:37.431 true 00:08:37.431 
09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:37.431 [2024-10-11 09:42:21.853918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 [2024-10-11 09:42:21.901620] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:37.431 [2024-10-11 09:42:21.901691] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:37.431 [2024-10-11 09:42:21.901727] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:37.431 true 00:08:37.431 09:42:21 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 [2024-10-11 09:42:21.917779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61056 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 61056 ']' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 61056 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.431 09:42:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61056 00:08:37.431 killing process with pid 61056 00:08:37.431 09:42:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.431 09:42:22 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.431 09:42:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61056' 00:08:37.431 09:42:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 61056 00:08:37.431 [2024-10-11 09:42:22.004072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.431 [2024-10-11 09:42:22.004182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.431 09:42:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 61056 00:08:37.431 [2024-10-11 09:42:22.004779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.431 [2024-10-11 09:42:22.004876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:37.431 [2024-10-11 09:42:22.022770] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.810 ************************************ 00:08:38.810 END TEST raid1_resize_test 00:08:38.810 ************************************ 00:08:38.810 09:42:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:38.810 00:08:38.810 real 0m2.288s 00:08:38.810 user 0m2.431s 00:08:38.810 sys 0m0.335s 00:08:38.810 09:42:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.810 09:42:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.810 09:42:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:38.810 09:42:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:38.810 09:42:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:38.810 09:42:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:38.810 09:42:23 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.810 09:42:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.810 ************************************ 00:08:38.810 START TEST raid_state_function_test 00:08:38.810 ************************************ 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:38.810 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61113 00:08:38.811 Process raid pid: 61113 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61113' 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61113 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61113 ']' 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.811 09:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.811 [2024-10-11 09:42:23.327363] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:38.811 [2024-10-11 09:42:23.327593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.070 [2024-10-11 09:42:23.476793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.070 [2024-10-11 09:42:23.602161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.329 [2024-10-11 09:42:23.838499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.329 [2024-10-11 09:42:23.838645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.590 [2024-10-11 09:42:24.195471] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.590 [2024-10-11 09:42:24.195608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.590 [2024-10-11 09:42:24.195652] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.590 [2024-10-11 09:42:24.195694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.590 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.849 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.849 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.849 "name": "Existed_Raid", 00:08:39.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.849 "strip_size_kb": 64, 00:08:39.849 "state": "configuring", 00:08:39.849 "raid_level": "raid0", 00:08:39.849 "superblock": false, 00:08:39.849 "num_base_bdevs": 2, 00:08:39.849 "num_base_bdevs_discovered": 0, 00:08:39.849 "num_base_bdevs_operational": 2, 00:08:39.849 "base_bdevs_list": [ 00:08:39.849 { 00:08:39.849 "name": "BaseBdev1", 00:08:39.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.849 "is_configured": false, 00:08:39.849 "data_offset": 0, 00:08:39.849 "data_size": 0 00:08:39.849 }, 00:08:39.849 { 00:08:39.849 "name": "BaseBdev2", 00:08:39.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.849 "is_configured": false, 00:08:39.849 "data_offset": 0, 00:08:39.849 "data_size": 0 00:08:39.849 } 00:08:39.849 ] 00:08:39.849 }' 00:08:39.849 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.849 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.109 [2024-10-11 09:42:24.654622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.109 [2024-10-11 09:42:24.654660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.109 [2024-10-11 09:42:24.666628] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.109 [2024-10-11 09:42:24.666678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.109 [2024-10-11 09:42:24.666687] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.109 [2024-10-11 09:42:24.666699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.109 [2024-10-11 09:42:24.720468] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.109 BaseBdev1 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.109 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.370 [ 00:08:40.370 { 00:08:40.370 "name": "BaseBdev1", 00:08:40.370 "aliases": [ 00:08:40.370 "9610b794-4f05-4aac-880b-b95e77472af3" 00:08:40.370 ], 00:08:40.370 "product_name": "Malloc disk", 00:08:40.370 "block_size": 512, 00:08:40.370 "num_blocks": 65536, 00:08:40.370 "uuid": 
"9610b794-4f05-4aac-880b-b95e77472af3", 00:08:40.370 "assigned_rate_limits": { 00:08:40.370 "rw_ios_per_sec": 0, 00:08:40.370 "rw_mbytes_per_sec": 0, 00:08:40.370 "r_mbytes_per_sec": 0, 00:08:40.370 "w_mbytes_per_sec": 0 00:08:40.370 }, 00:08:40.370 "claimed": true, 00:08:40.370 "claim_type": "exclusive_write", 00:08:40.370 "zoned": false, 00:08:40.370 "supported_io_types": { 00:08:40.370 "read": true, 00:08:40.370 "write": true, 00:08:40.370 "unmap": true, 00:08:40.370 "flush": true, 00:08:40.370 "reset": true, 00:08:40.370 "nvme_admin": false, 00:08:40.370 "nvme_io": false, 00:08:40.370 "nvme_io_md": false, 00:08:40.370 "write_zeroes": true, 00:08:40.370 "zcopy": true, 00:08:40.370 "get_zone_info": false, 00:08:40.370 "zone_management": false, 00:08:40.370 "zone_append": false, 00:08:40.370 "compare": false, 00:08:40.370 "compare_and_write": false, 00:08:40.370 "abort": true, 00:08:40.370 "seek_hole": false, 00:08:40.370 "seek_data": false, 00:08:40.370 "copy": true, 00:08:40.370 "nvme_iov_md": false 00:08:40.370 }, 00:08:40.370 "memory_domains": [ 00:08:40.370 { 00:08:40.370 "dma_device_id": "system", 00:08:40.370 "dma_device_type": 1 00:08:40.370 }, 00:08:40.370 { 00:08:40.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.370 "dma_device_type": 2 00:08:40.370 } 00:08:40.370 ], 00:08:40.370 "driver_specific": {} 00:08:40.370 } 00:08:40.370 ] 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.370 09:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.370 "name": "Existed_Raid", 00:08:40.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.370 "strip_size_kb": 64, 00:08:40.370 "state": "configuring", 00:08:40.370 "raid_level": "raid0", 00:08:40.370 "superblock": false, 00:08:40.370 "num_base_bdevs": 2, 00:08:40.370 "num_base_bdevs_discovered": 1, 00:08:40.370 "num_base_bdevs_operational": 2, 00:08:40.370 "base_bdevs_list": [ 00:08:40.370 { 00:08:40.370 "name": "BaseBdev1", 00:08:40.370 "uuid": "9610b794-4f05-4aac-880b-b95e77472af3", 00:08:40.370 "is_configured": true, 00:08:40.370 "data_offset": 0, 
00:08:40.370 "data_size": 65536 00:08:40.370 }, 00:08:40.370 { 00:08:40.370 "name": "BaseBdev2", 00:08:40.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.370 "is_configured": false, 00:08:40.370 "data_offset": 0, 00:08:40.370 "data_size": 0 00:08:40.370 } 00:08:40.370 ] 00:08:40.370 }' 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.370 09:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.630 [2024-10-11 09:42:25.175834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.630 [2024-10-11 09:42:25.175906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.630 [2024-10-11 09:42:25.187894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.630 [2024-10-11 09:42:25.190034] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.630 [2024-10-11 09:42:25.190081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.630 "name": "Existed_Raid", 00:08:40.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.630 "strip_size_kb": 64, 00:08:40.630 "state": "configuring", 00:08:40.630 "raid_level": "raid0", 00:08:40.630 "superblock": false, 00:08:40.630 "num_base_bdevs": 2, 00:08:40.630 "num_base_bdevs_discovered": 1, 00:08:40.630 "num_base_bdevs_operational": 2, 00:08:40.630 "base_bdevs_list": [ 00:08:40.630 { 00:08:40.630 "name": "BaseBdev1", 00:08:40.630 "uuid": "9610b794-4f05-4aac-880b-b95e77472af3", 00:08:40.630 "is_configured": true, 00:08:40.630 "data_offset": 0, 00:08:40.630 "data_size": 65536 00:08:40.630 }, 00:08:40.630 { 00:08:40.630 "name": "BaseBdev2", 00:08:40.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.630 "is_configured": false, 00:08:40.630 "data_offset": 0, 00:08:40.630 "data_size": 0 00:08:40.630 } 00:08:40.630 ] 00:08:40.630 }' 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.630 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.197 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.197 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.198 [2024-10-11 09:42:25.709391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.198 [2024-10-11 09:42:25.709445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.198 [2024-10-11 09:42:25.709456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:41.198 [2024-10-11 09:42:25.709727] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.198 [2024-10-11 09:42:25.709980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.198 [2024-10-11 09:42:25.709997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:41.198 [2024-10-11 09:42:25.710310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.198 BaseBdev2 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.198 09:42:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.198 [ 00:08:41.198 { 00:08:41.198 "name": "BaseBdev2", 00:08:41.198 "aliases": [ 00:08:41.198 "4c372ce1-148d-4a28-9c15-38a9ec6ff247" 00:08:41.198 ], 00:08:41.198 "product_name": "Malloc disk", 00:08:41.198 "block_size": 512, 00:08:41.198 "num_blocks": 65536, 00:08:41.198 "uuid": "4c372ce1-148d-4a28-9c15-38a9ec6ff247", 00:08:41.198 "assigned_rate_limits": { 00:08:41.198 "rw_ios_per_sec": 0, 00:08:41.198 "rw_mbytes_per_sec": 0, 00:08:41.198 "r_mbytes_per_sec": 0, 00:08:41.198 "w_mbytes_per_sec": 0 00:08:41.198 }, 00:08:41.198 "claimed": true, 00:08:41.198 "claim_type": "exclusive_write", 00:08:41.198 "zoned": false, 00:08:41.198 "supported_io_types": { 00:08:41.198 "read": true, 00:08:41.198 "write": true, 00:08:41.198 "unmap": true, 00:08:41.198 "flush": true, 00:08:41.198 "reset": true, 00:08:41.198 "nvme_admin": false, 00:08:41.198 "nvme_io": false, 00:08:41.198 "nvme_io_md": false, 00:08:41.198 "write_zeroes": true, 00:08:41.198 "zcopy": true, 00:08:41.198 "get_zone_info": false, 00:08:41.198 "zone_management": false, 00:08:41.198 "zone_append": false, 00:08:41.198 "compare": false, 00:08:41.198 "compare_and_write": false, 00:08:41.198 "abort": true, 00:08:41.198 "seek_hole": false, 00:08:41.198 "seek_data": false, 00:08:41.198 "copy": true, 00:08:41.198 "nvme_iov_md": false 00:08:41.198 }, 00:08:41.198 "memory_domains": [ 00:08:41.198 { 00:08:41.198 "dma_device_id": "system", 00:08:41.198 "dma_device_type": 1 00:08:41.198 }, 00:08:41.198 { 00:08:41.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.198 "dma_device_type": 2 00:08:41.198 } 00:08:41.198 ], 00:08:41.198 "driver_specific": {} 00:08:41.198 } 00:08:41.198 ] 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.198 09:42:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:41.198 "name": "Existed_Raid", 00:08:41.198 "uuid": "ea13d6d7-d462-47d0-b7fc-fddd987d4e65", 00:08:41.198 "strip_size_kb": 64, 00:08:41.198 "state": "online", 00:08:41.198 "raid_level": "raid0", 00:08:41.198 "superblock": false, 00:08:41.198 "num_base_bdevs": 2, 00:08:41.198 "num_base_bdevs_discovered": 2, 00:08:41.198 "num_base_bdevs_operational": 2, 00:08:41.198 "base_bdevs_list": [ 00:08:41.198 { 00:08:41.198 "name": "BaseBdev1", 00:08:41.198 "uuid": "9610b794-4f05-4aac-880b-b95e77472af3", 00:08:41.198 "is_configured": true, 00:08:41.198 "data_offset": 0, 00:08:41.198 "data_size": 65536 00:08:41.198 }, 00:08:41.198 { 00:08:41.198 "name": "BaseBdev2", 00:08:41.198 "uuid": "4c372ce1-148d-4a28-9c15-38a9ec6ff247", 00:08:41.198 "is_configured": true, 00:08:41.198 "data_offset": 0, 00:08:41.198 "data_size": 65536 00:08:41.198 } 00:08:41.198 ] 00:08:41.198 }' 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.198 09:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.768 [2024-10-11 09:42:26.212975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.768 "name": "Existed_Raid", 00:08:41.768 "aliases": [ 00:08:41.768 "ea13d6d7-d462-47d0-b7fc-fddd987d4e65" 00:08:41.768 ], 00:08:41.768 "product_name": "Raid Volume", 00:08:41.768 "block_size": 512, 00:08:41.768 "num_blocks": 131072, 00:08:41.768 "uuid": "ea13d6d7-d462-47d0-b7fc-fddd987d4e65", 00:08:41.768 "assigned_rate_limits": { 00:08:41.768 "rw_ios_per_sec": 0, 00:08:41.768 "rw_mbytes_per_sec": 0, 00:08:41.768 "r_mbytes_per_sec": 0, 00:08:41.768 "w_mbytes_per_sec": 0 00:08:41.768 }, 00:08:41.768 "claimed": false, 00:08:41.768 "zoned": false, 00:08:41.768 "supported_io_types": { 00:08:41.768 "read": true, 00:08:41.768 "write": true, 00:08:41.768 "unmap": true, 00:08:41.768 "flush": true, 00:08:41.768 "reset": true, 00:08:41.768 "nvme_admin": false, 00:08:41.768 "nvme_io": false, 00:08:41.768 "nvme_io_md": false, 00:08:41.768 "write_zeroes": true, 00:08:41.768 "zcopy": false, 00:08:41.768 "get_zone_info": false, 00:08:41.768 "zone_management": false, 00:08:41.768 "zone_append": false, 00:08:41.768 "compare": false, 00:08:41.768 "compare_and_write": false, 00:08:41.768 "abort": false, 00:08:41.768 "seek_hole": false, 00:08:41.768 "seek_data": false, 00:08:41.768 "copy": false, 00:08:41.768 "nvme_iov_md": false 00:08:41.768 }, 00:08:41.768 "memory_domains": [ 00:08:41.768 { 00:08:41.768 "dma_device_id": "system", 00:08:41.768 "dma_device_type": 1 00:08:41.768 }, 00:08:41.768 { 00:08:41.768 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:41.768 "dma_device_type": 2 00:08:41.768 }, 00:08:41.768 { 00:08:41.768 "dma_device_id": "system", 00:08:41.768 "dma_device_type": 1 00:08:41.768 }, 00:08:41.768 { 00:08:41.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.768 "dma_device_type": 2 00:08:41.768 } 00:08:41.768 ], 00:08:41.768 "driver_specific": { 00:08:41.768 "raid": { 00:08:41.768 "uuid": "ea13d6d7-d462-47d0-b7fc-fddd987d4e65", 00:08:41.768 "strip_size_kb": 64, 00:08:41.768 "state": "online", 00:08:41.768 "raid_level": "raid0", 00:08:41.768 "superblock": false, 00:08:41.768 "num_base_bdevs": 2, 00:08:41.768 "num_base_bdevs_discovered": 2, 00:08:41.768 "num_base_bdevs_operational": 2, 00:08:41.768 "base_bdevs_list": [ 00:08:41.768 { 00:08:41.768 "name": "BaseBdev1", 00:08:41.768 "uuid": "9610b794-4f05-4aac-880b-b95e77472af3", 00:08:41.768 "is_configured": true, 00:08:41.768 "data_offset": 0, 00:08:41.768 "data_size": 65536 00:08:41.768 }, 00:08:41.768 { 00:08:41.768 "name": "BaseBdev2", 00:08:41.768 "uuid": "4c372ce1-148d-4a28-9c15-38a9ec6ff247", 00:08:41.768 "is_configured": true, 00:08:41.768 "data_offset": 0, 00:08:41.768 "data_size": 65536 00:08:41.768 } 00:08:41.768 ] 00:08:41.768 } 00:08:41.768 } 00:08:41.768 }' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.768 BaseBdev2' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.768 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:42.028 [2024-10-11 09:42:26.440344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.028 [2024-10-11 09:42:26.440465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.028 [2024-10-11 09:42:26.440561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.028 09:42:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.028 "name": "Existed_Raid", 00:08:42.028 "uuid": "ea13d6d7-d462-47d0-b7fc-fddd987d4e65", 00:08:42.028 "strip_size_kb": 64, 00:08:42.028 "state": "offline", 00:08:42.028 "raid_level": "raid0", 00:08:42.028 "superblock": false, 00:08:42.028 "num_base_bdevs": 2, 00:08:42.028 "num_base_bdevs_discovered": 1, 00:08:42.028 "num_base_bdevs_operational": 1, 00:08:42.028 "base_bdevs_list": [ 00:08:42.028 { 00:08:42.028 "name": null, 00:08:42.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.028 "is_configured": false, 00:08:42.028 "data_offset": 0, 00:08:42.028 "data_size": 65536 00:08:42.028 }, 00:08:42.028 { 00:08:42.028 "name": "BaseBdev2", 00:08:42.028 "uuid": "4c372ce1-148d-4a28-9c15-38a9ec6ff247", 00:08:42.028 "is_configured": true, 00:08:42.028 "data_offset": 0, 00:08:42.028 "data_size": 65536 00:08:42.028 } 00:08:42.028 ] 00:08:42.028 }' 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.028 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.287 09:42:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.287 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.287 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.287 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.287 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.287 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.546 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.546 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.546 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.546 09:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.546 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.546 09:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.546 [2024-10-11 09:42:26.946534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.546 [2024-10-11 09:42:26.946603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.546 09:42:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61113 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61113 ']' 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 61113 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61113 00:08:42.546 killing process with pid 61113 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61113' 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61113 00:08:42.546 [2024-10-11 09:42:27.131025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:08:42.546 09:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61113 00:08:42.546 [2024-10-11 09:42:27.148892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:43.920 00:08:43.920 real 0m5.098s 00:08:43.920 user 0m7.371s 00:08:43.920 sys 0m0.779s 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.920 ************************************ 00:08:43.920 END TEST raid_state_function_test 00:08:43.920 ************************************ 00:08:43.920 09:42:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:43.920 09:42:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:43.920 09:42:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.920 09:42:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.920 ************************************ 00:08:43.920 START TEST raid_state_function_test_sb 00:08:43.920 ************************************ 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:08:43.920 Process raid pid: 61366 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61366 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61366' 00:08:43.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61366 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61366 ']' 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.920 09:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.920 [2024-10-11 09:42:28.444882] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:08:43.920 [2024-10-11 09:42:28.445071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.178 [2024-10-11 09:42:28.608929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.178 [2024-10-11 09:42:28.761945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.435 [2024-10-11 09:42:28.989543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.435 [2024-10-11 09:42:28.989618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 [2024-10-11 09:42:29.415000] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.001 [2024-10-11 09:42:29.415080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.001 [2024-10-11 09:42:29.415094] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.001 [2024-10-11 09:42:29.415108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.001 
09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.001 "name": "Existed_Raid", 00:08:45.001 "uuid": "0f0ce854-a526-418b-8bd7-4a834ab4d278", 00:08:45.001 "strip_size_kb": 
64, 00:08:45.001 "state": "configuring", 00:08:45.001 "raid_level": "raid0", 00:08:45.001 "superblock": true, 00:08:45.001 "num_base_bdevs": 2, 00:08:45.001 "num_base_bdevs_discovered": 0, 00:08:45.001 "num_base_bdevs_operational": 2, 00:08:45.001 "base_bdevs_list": [ 00:08:45.001 { 00:08:45.001 "name": "BaseBdev1", 00:08:45.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.001 "is_configured": false, 00:08:45.001 "data_offset": 0, 00:08:45.001 "data_size": 0 00:08:45.001 }, 00:08:45.001 { 00:08:45.001 "name": "BaseBdev2", 00:08:45.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.001 "is_configured": false, 00:08:45.001 "data_offset": 0, 00:08:45.001 "data_size": 0 00:08:45.001 } 00:08:45.001 ] 00:08:45.001 }' 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.001 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.259 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.260 [2024-10-11 09:42:29.802430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.260 [2024-10-11 09:42:29.802487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.260 09:42:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.260 [2024-10-11 09:42:29.810476] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.260 [2024-10-11 09:42:29.810545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.260 [2024-10-11 09:42:29.810558] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.260 [2024-10-11 09:42:29.810574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.260 [2024-10-11 09:42:29.857035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.260 BaseBdev1 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.260 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.260 [ 00:08:45.260 { 00:08:45.260 "name": "BaseBdev1", 00:08:45.260 "aliases": [ 00:08:45.260 "13337677-a782-44f4-8717-fb50e98a3235" 00:08:45.260 ], 00:08:45.260 "product_name": "Malloc disk", 00:08:45.260 "block_size": 512, 00:08:45.260 "num_blocks": 65536, 00:08:45.260 "uuid": "13337677-a782-44f4-8717-fb50e98a3235", 00:08:45.260 "assigned_rate_limits": { 00:08:45.260 "rw_ios_per_sec": 0, 00:08:45.260 "rw_mbytes_per_sec": 0, 00:08:45.260 "r_mbytes_per_sec": 0, 00:08:45.260 "w_mbytes_per_sec": 0 00:08:45.260 }, 00:08:45.260 "claimed": true, 00:08:45.260 "claim_type": "exclusive_write", 00:08:45.260 "zoned": false, 00:08:45.260 "supported_io_types": { 00:08:45.260 "read": true, 00:08:45.260 "write": true, 00:08:45.260 "unmap": true, 00:08:45.260 "flush": true, 00:08:45.260 "reset": true, 00:08:45.260 "nvme_admin": false, 00:08:45.260 "nvme_io": false, 00:08:45.260 "nvme_io_md": false, 00:08:45.260 "write_zeroes": true, 00:08:45.260 "zcopy": true, 00:08:45.260 "get_zone_info": false, 00:08:45.260 "zone_management": false, 00:08:45.260 "zone_append": false, 00:08:45.260 "compare": false, 00:08:45.260 "compare_and_write": false, 00:08:45.260 
"abort": true, 00:08:45.260 "seek_hole": false, 00:08:45.260 "seek_data": false, 00:08:45.260 "copy": true, 00:08:45.260 "nvme_iov_md": false 00:08:45.260 }, 00:08:45.260 "memory_domains": [ 00:08:45.260 { 00:08:45.260 "dma_device_id": "system", 00:08:45.260 "dma_device_type": 1 00:08:45.260 }, 00:08:45.260 { 00:08:45.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.260 "dma_device_type": 2 00:08:45.260 } 00:08:45.260 ], 00:08:45.260 "driver_specific": {} 00:08:45.260 } 00:08:45.260 ] 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.518 "name": "Existed_Raid", 00:08:45.518 "uuid": "b5183489-6fcc-41c7-9a48-f776f38b5402", 00:08:45.518 "strip_size_kb": 64, 00:08:45.518 "state": "configuring", 00:08:45.518 "raid_level": "raid0", 00:08:45.518 "superblock": true, 00:08:45.518 "num_base_bdevs": 2, 00:08:45.518 "num_base_bdevs_discovered": 1, 00:08:45.518 "num_base_bdevs_operational": 2, 00:08:45.518 "base_bdevs_list": [ 00:08:45.518 { 00:08:45.518 "name": "BaseBdev1", 00:08:45.518 "uuid": "13337677-a782-44f4-8717-fb50e98a3235", 00:08:45.518 "is_configured": true, 00:08:45.518 "data_offset": 2048, 00:08:45.518 "data_size": 63488 00:08:45.518 }, 00:08:45.518 { 00:08:45.518 "name": "BaseBdev2", 00:08:45.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.518 "is_configured": false, 00:08:45.518 "data_offset": 0, 00:08:45.518 "data_size": 0 00:08:45.518 } 00:08:45.518 ] 00:08:45.518 }' 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.518 09:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.778 [2024-10-11 09:42:30.292897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.778 [2024-10-11 09:42:30.292965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.778 [2024-10-11 09:42:30.301023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.778 [2024-10-11 09:42:30.303440] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.778 [2024-10-11 09:42:30.303502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.778 "name": "Existed_Raid", 00:08:45.778 "uuid": "951a192c-000f-4756-8a24-a97fef5be5dd", 00:08:45.778 "strip_size_kb": 64, 00:08:45.778 "state": "configuring", 00:08:45.778 "raid_level": "raid0", 00:08:45.778 "superblock": true, 00:08:45.778 "num_base_bdevs": 2, 00:08:45.778 "num_base_bdevs_discovered": 1, 00:08:45.778 "num_base_bdevs_operational": 2, 00:08:45.778 "base_bdevs_list": [ 00:08:45.778 { 00:08:45.778 "name": "BaseBdev1", 00:08:45.778 "uuid": "13337677-a782-44f4-8717-fb50e98a3235", 00:08:45.778 "is_configured": true, 00:08:45.778 "data_offset": 2048, 
00:08:45.778 "data_size": 63488 00:08:45.778 }, 00:08:45.778 { 00:08:45.778 "name": "BaseBdev2", 00:08:45.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.778 "is_configured": false, 00:08:45.778 "data_offset": 0, 00:08:45.778 "data_size": 0 00:08:45.778 } 00:08:45.778 ] 00:08:45.778 }' 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.778 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.348 [2024-10-11 09:42:30.823816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.348 [2024-10-11 09:42:30.824121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:46.348 [2024-10-11 09:42:30.824138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:46.348 [2024-10-11 09:42:30.824468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:46.348 [2024-10-11 09:42:30.824656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:46.348 [2024-10-11 09:42:30.824680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:46.348 BaseBdev2 00:08:46.348 [2024-10-11 09:42:30.824871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.348 [ 00:08:46.348 { 00:08:46.348 "name": "BaseBdev2", 00:08:46.348 "aliases": [ 00:08:46.348 "b2c04e41-6a81-497c-83b3-8c75564249a0" 00:08:46.348 ], 00:08:46.348 "product_name": "Malloc disk", 00:08:46.348 "block_size": 512, 00:08:46.348 "num_blocks": 65536, 00:08:46.348 "uuid": "b2c04e41-6a81-497c-83b3-8c75564249a0", 00:08:46.348 "assigned_rate_limits": { 00:08:46.348 "rw_ios_per_sec": 0, 00:08:46.348 "rw_mbytes_per_sec": 0, 00:08:46.348 "r_mbytes_per_sec": 0, 00:08:46.348 "w_mbytes_per_sec": 0 00:08:46.348 }, 00:08:46.348 "claimed": true, 00:08:46.348 "claim_type": 
"exclusive_write", 00:08:46.348 "zoned": false, 00:08:46.348 "supported_io_types": { 00:08:46.348 "read": true, 00:08:46.348 "write": true, 00:08:46.348 "unmap": true, 00:08:46.348 "flush": true, 00:08:46.348 "reset": true, 00:08:46.348 "nvme_admin": false, 00:08:46.348 "nvme_io": false, 00:08:46.348 "nvme_io_md": false, 00:08:46.348 "write_zeroes": true, 00:08:46.348 "zcopy": true, 00:08:46.348 "get_zone_info": false, 00:08:46.348 "zone_management": false, 00:08:46.348 "zone_append": false, 00:08:46.348 "compare": false, 00:08:46.348 "compare_and_write": false, 00:08:46.348 "abort": true, 00:08:46.348 "seek_hole": false, 00:08:46.348 "seek_data": false, 00:08:46.348 "copy": true, 00:08:46.348 "nvme_iov_md": false 00:08:46.348 }, 00:08:46.348 "memory_domains": [ 00:08:46.348 { 00:08:46.348 "dma_device_id": "system", 00:08:46.348 "dma_device_type": 1 00:08:46.348 }, 00:08:46.348 { 00:08:46.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.348 "dma_device_type": 2 00:08:46.348 } 00:08:46.348 ], 00:08:46.348 "driver_specific": {} 00:08:46.348 } 00:08:46.348 ] 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.348 "name": "Existed_Raid", 00:08:46.348 "uuid": "951a192c-000f-4756-8a24-a97fef5be5dd", 00:08:46.348 "strip_size_kb": 64, 00:08:46.348 "state": "online", 00:08:46.348 "raid_level": "raid0", 00:08:46.348 "superblock": true, 00:08:46.348 "num_base_bdevs": 2, 00:08:46.348 "num_base_bdevs_discovered": 2, 00:08:46.348 "num_base_bdevs_operational": 2, 00:08:46.348 "base_bdevs_list": [ 00:08:46.348 { 00:08:46.348 "name": "BaseBdev1", 00:08:46.348 "uuid": "13337677-a782-44f4-8717-fb50e98a3235", 00:08:46.348 "is_configured": true, 00:08:46.348 "data_offset": 2048, 00:08:46.348 "data_size": 63488 
00:08:46.348 }, 00:08:46.348 { 00:08:46.348 "name": "BaseBdev2", 00:08:46.348 "uuid": "b2c04e41-6a81-497c-83b3-8c75564249a0", 00:08:46.348 "is_configured": true, 00:08:46.348 "data_offset": 2048, 00:08:46.348 "data_size": 63488 00:08:46.348 } 00:08:46.348 ] 00:08:46.348 }' 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.348 09:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.957 [2024-10-11 09:42:31.331358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.957 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.957 "name": 
"Existed_Raid", 00:08:46.957 "aliases": [ 00:08:46.957 "951a192c-000f-4756-8a24-a97fef5be5dd" 00:08:46.957 ], 00:08:46.957 "product_name": "Raid Volume", 00:08:46.957 "block_size": 512, 00:08:46.957 "num_blocks": 126976, 00:08:46.957 "uuid": "951a192c-000f-4756-8a24-a97fef5be5dd", 00:08:46.957 "assigned_rate_limits": { 00:08:46.957 "rw_ios_per_sec": 0, 00:08:46.957 "rw_mbytes_per_sec": 0, 00:08:46.957 "r_mbytes_per_sec": 0, 00:08:46.957 "w_mbytes_per_sec": 0 00:08:46.957 }, 00:08:46.957 "claimed": false, 00:08:46.957 "zoned": false, 00:08:46.957 "supported_io_types": { 00:08:46.957 "read": true, 00:08:46.957 "write": true, 00:08:46.957 "unmap": true, 00:08:46.957 "flush": true, 00:08:46.957 "reset": true, 00:08:46.957 "nvme_admin": false, 00:08:46.957 "nvme_io": false, 00:08:46.957 "nvme_io_md": false, 00:08:46.957 "write_zeroes": true, 00:08:46.957 "zcopy": false, 00:08:46.957 "get_zone_info": false, 00:08:46.957 "zone_management": false, 00:08:46.957 "zone_append": false, 00:08:46.957 "compare": false, 00:08:46.957 "compare_and_write": false, 00:08:46.957 "abort": false, 00:08:46.957 "seek_hole": false, 00:08:46.957 "seek_data": false, 00:08:46.957 "copy": false, 00:08:46.957 "nvme_iov_md": false 00:08:46.957 }, 00:08:46.957 "memory_domains": [ 00:08:46.957 { 00:08:46.957 "dma_device_id": "system", 00:08:46.957 "dma_device_type": 1 00:08:46.957 }, 00:08:46.957 { 00:08:46.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.957 "dma_device_type": 2 00:08:46.957 }, 00:08:46.957 { 00:08:46.957 "dma_device_id": "system", 00:08:46.957 "dma_device_type": 1 00:08:46.957 }, 00:08:46.957 { 00:08:46.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.958 "dma_device_type": 2 00:08:46.958 } 00:08:46.958 ], 00:08:46.958 "driver_specific": { 00:08:46.958 "raid": { 00:08:46.958 "uuid": "951a192c-000f-4756-8a24-a97fef5be5dd", 00:08:46.958 "strip_size_kb": 64, 00:08:46.958 "state": "online", 00:08:46.958 "raid_level": "raid0", 00:08:46.958 "superblock": true, 00:08:46.958 
"num_base_bdevs": 2, 00:08:46.958 "num_base_bdevs_discovered": 2, 00:08:46.958 "num_base_bdevs_operational": 2, 00:08:46.958 "base_bdevs_list": [ 00:08:46.958 { 00:08:46.958 "name": "BaseBdev1", 00:08:46.958 "uuid": "13337677-a782-44f4-8717-fb50e98a3235", 00:08:46.958 "is_configured": true, 00:08:46.958 "data_offset": 2048, 00:08:46.958 "data_size": 63488 00:08:46.958 }, 00:08:46.958 { 00:08:46.958 "name": "BaseBdev2", 00:08:46.958 "uuid": "b2c04e41-6a81-497c-83b3-8c75564249a0", 00:08:46.958 "is_configured": true, 00:08:46.958 "data_offset": 2048, 00:08:46.958 "data_size": 63488 00:08:46.958 } 00:08:46.958 ] 00:08:46.958 } 00:08:46.958 } 00:08:46.958 }' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:46.958 BaseBdev2' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.958 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.958 [2024-10-11 09:42:31.562722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.958 [2024-10-11 09:42:31.562782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.958 [2024-10-11 09:42:31.562842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.217 09:42:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.217 "name": "Existed_Raid", 00:08:47.217 "uuid": "951a192c-000f-4756-8a24-a97fef5be5dd", 00:08:47.217 "strip_size_kb": 64, 00:08:47.217 "state": "offline", 00:08:47.217 "raid_level": "raid0", 00:08:47.217 "superblock": true, 00:08:47.217 "num_base_bdevs": 2, 00:08:47.217 "num_base_bdevs_discovered": 1, 00:08:47.217 "num_base_bdevs_operational": 1, 00:08:47.217 "base_bdevs_list": [ 00:08:47.217 { 00:08:47.217 "name": null, 00:08:47.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.217 "is_configured": false, 00:08:47.217 "data_offset": 0, 00:08:47.217 "data_size": 63488 00:08:47.217 }, 00:08:47.217 { 00:08:47.217 "name": "BaseBdev2", 00:08:47.217 "uuid": "b2c04e41-6a81-497c-83b3-8c75564249a0", 00:08:47.217 "is_configured": true, 00:08:47.217 "data_offset": 2048, 00:08:47.217 "data_size": 63488 00:08:47.217 } 00:08:47.217 ] 00:08:47.217 }' 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.217 09:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.784 09:42:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.784 [2024-10-11 09:42:32.164072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.784 [2024-10-11 09:42:32.164235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61366 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61366 ']' 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61366 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61366 00:08:47.784 killing process with pid 61366 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.784 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.785 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61366' 00:08:47.785 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61366 00:08:47.785 09:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61366 00:08:47.785 [2024-10-11 09:42:32.353577] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.785 [2024-10-11 09:42:32.372336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.161 09:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:08:49.161 00:08:49.161 real 0m5.167s 00:08:49.161 user 0m7.498s 00:08:49.161 sys 0m0.752s 00:08:49.161 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.161 09:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.161 ************************************ 00:08:49.161 END TEST raid_state_function_test_sb 00:08:49.161 ************************************ 00:08:49.161 09:42:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:49.161 09:42:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:49.161 09:42:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.161 09:42:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.161 ************************************ 00:08:49.161 START TEST raid_superblock_test 00:08:49.161 ************************************ 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61625 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61625 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61625 ']' 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.161 09:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.161 [2024-10-11 09:42:33.679579] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:49.161 [2024-10-11 09:42:33.679822] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61625 ] 00:08:49.419 [2024-10-11 09:42:33.847007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.420 [2024-10-11 09:42:33.982851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.677 [2024-10-11 09:42:34.233050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.677 [2024-10-11 09:42:34.233091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:49.935 09:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.935 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.228 malloc1 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.228 [2024-10-11 09:42:34.620427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.228 [2024-10-11 09:42:34.620556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.228 [2024-10-11 09:42:34.620607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:50.228 [2024-10-11 09:42:34.620660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.228 [2024-10-11 09:42:34.623068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.228 [2024-10-11 09:42:34.623143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.228 pt1 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.228 09:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.228 malloc2 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.228 [2024-10-11 09:42:34.690659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.228 [2024-10-11 09:42:34.690729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.228 [2024-10-11 09:42:34.690778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:50.228 
[2024-10-11 09:42:34.690790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.228 [2024-10-11 09:42:34.693359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.228 [2024-10-11 09:42:34.693502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.228 pt2 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.228 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.229 [2024-10-11 09:42:34.702710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.229 [2024-10-11 09:42:34.704946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.229 [2024-10-11 09:42:34.705132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:50.229 [2024-10-11 09:42:34.705148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:50.229 [2024-10-11 09:42:34.705446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:50.229 [2024-10-11 09:42:34.705616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:50.229 [2024-10-11 09:42:34.705630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:50.229 [2024-10-11 09:42:34.705828] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.229 "name": "raid_bdev1", 00:08:50.229 "uuid": 
"bb5f01ac-93be-47e3-ba88-fbe0593fa8dd", 00:08:50.229 "strip_size_kb": 64, 00:08:50.229 "state": "online", 00:08:50.229 "raid_level": "raid0", 00:08:50.229 "superblock": true, 00:08:50.229 "num_base_bdevs": 2, 00:08:50.229 "num_base_bdevs_discovered": 2, 00:08:50.229 "num_base_bdevs_operational": 2, 00:08:50.229 "base_bdevs_list": [ 00:08:50.229 { 00:08:50.229 "name": "pt1", 00:08:50.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.229 "is_configured": true, 00:08:50.229 "data_offset": 2048, 00:08:50.229 "data_size": 63488 00:08:50.229 }, 00:08:50.229 { 00:08:50.229 "name": "pt2", 00:08:50.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.229 "is_configured": true, 00:08:50.229 "data_offset": 2048, 00:08:50.229 "data_size": 63488 00:08:50.229 } 00:08:50.229 ] 00:08:50.229 }' 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.229 09:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.796 09:42:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.796 [2024-10-11 09:42:35.154264] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.796 "name": "raid_bdev1", 00:08:50.796 "aliases": [ 00:08:50.796 "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd" 00:08:50.796 ], 00:08:50.796 "product_name": "Raid Volume", 00:08:50.796 "block_size": 512, 00:08:50.796 "num_blocks": 126976, 00:08:50.796 "uuid": "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd", 00:08:50.796 "assigned_rate_limits": { 00:08:50.796 "rw_ios_per_sec": 0, 00:08:50.796 "rw_mbytes_per_sec": 0, 00:08:50.796 "r_mbytes_per_sec": 0, 00:08:50.796 "w_mbytes_per_sec": 0 00:08:50.796 }, 00:08:50.796 "claimed": false, 00:08:50.796 "zoned": false, 00:08:50.796 "supported_io_types": { 00:08:50.796 "read": true, 00:08:50.796 "write": true, 00:08:50.796 "unmap": true, 00:08:50.796 "flush": true, 00:08:50.796 "reset": true, 00:08:50.796 "nvme_admin": false, 00:08:50.796 "nvme_io": false, 00:08:50.796 "nvme_io_md": false, 00:08:50.796 "write_zeroes": true, 00:08:50.796 "zcopy": false, 00:08:50.796 "get_zone_info": false, 00:08:50.796 "zone_management": false, 00:08:50.796 "zone_append": false, 00:08:50.796 "compare": false, 00:08:50.796 "compare_and_write": false, 00:08:50.796 "abort": false, 00:08:50.796 "seek_hole": false, 00:08:50.796 "seek_data": false, 00:08:50.796 "copy": false, 00:08:50.796 "nvme_iov_md": false 00:08:50.796 }, 00:08:50.796 "memory_domains": [ 00:08:50.796 { 00:08:50.796 "dma_device_id": "system", 00:08:50.796 "dma_device_type": 1 00:08:50.796 }, 00:08:50.796 { 00:08:50.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.796 "dma_device_type": 2 00:08:50.796 }, 00:08:50.796 { 00:08:50.796 "dma_device_id": "system", 00:08:50.796 "dma_device_type": 
1 00:08:50.796 }, 00:08:50.796 { 00:08:50.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.796 "dma_device_type": 2 00:08:50.796 } 00:08:50.796 ], 00:08:50.796 "driver_specific": { 00:08:50.796 "raid": { 00:08:50.796 "uuid": "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd", 00:08:50.796 "strip_size_kb": 64, 00:08:50.796 "state": "online", 00:08:50.796 "raid_level": "raid0", 00:08:50.796 "superblock": true, 00:08:50.796 "num_base_bdevs": 2, 00:08:50.796 "num_base_bdevs_discovered": 2, 00:08:50.796 "num_base_bdevs_operational": 2, 00:08:50.796 "base_bdevs_list": [ 00:08:50.796 { 00:08:50.796 "name": "pt1", 00:08:50.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.796 "is_configured": true, 00:08:50.796 "data_offset": 2048, 00:08:50.796 "data_size": 63488 00:08:50.796 }, 00:08:50.796 { 00:08:50.796 "name": "pt2", 00:08:50.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.796 "is_configured": true, 00:08:50.796 "data_offset": 2048, 00:08:50.796 "data_size": 63488 00:08:50.796 } 00:08:50.796 ] 00:08:50.796 } 00:08:50.796 } 00:08:50.796 }' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:50.796 pt2' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.796 09:42:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.796 [2024-10-11 09:42:35.381835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bb5f01ac-93be-47e3-ba88-fbe0593fa8dd 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bb5f01ac-93be-47e3-ba88-fbe0593fa8dd ']' 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.796 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.796 [2024-10-11 09:42:35.425448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.054 [2024-10-11 09:42:35.425548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.054 [2024-10-11 09:42:35.425668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.054 [2024-10-11 09:42:35.425726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.054 [2024-10-11 09:42:35.425754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.054 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.054 [2024-10-11 09:42:35.577237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:51.054 [2024-10-11 09:42:35.579407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:51.054 [2024-10-11 09:42:35.579493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:51.054 [2024-10-11 09:42:35.579554] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:51.054 [2024-10-11 09:42:35.579572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.054 [2024-10-11 09:42:35.579599] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:51.054 request: 00:08:51.054 { 00:08:51.054 "name": "raid_bdev1", 00:08:51.054 "raid_level": "raid0", 00:08:51.054 "base_bdevs": [ 00:08:51.054 "malloc1", 00:08:51.054 "malloc2" 00:08:51.054 ], 00:08:51.054 "strip_size_kb": 64, 00:08:51.054 "superblock": false, 00:08:51.054 "method": "bdev_raid_create", 00:08:51.054 "req_id": 1 00:08:51.054 } 00:08:51.054 Got JSON-RPC error response 00:08:51.054 response: 00:08:51.054 { 00:08:51.054 "code": -17, 00:08:51.054 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:51.055 } 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.055 [2024-10-11 09:42:35.645081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.055 [2024-10-11 09:42:35.645215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.055 [2024-10-11 09:42:35.645259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:51.055 [2024-10-11 09:42:35.645342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.055 [2024-10-11 09:42:35.647718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.055 [2024-10-11 09:42:35.647828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.055 [2024-10-11 09:42:35.647982] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:51.055 [2024-10-11 09:42:35.648076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.055 pt1 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.055 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.313 09:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.313 "name": "raid_bdev1", 00:08:51.313 "uuid": "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd", 00:08:51.313 "strip_size_kb": 64, 00:08:51.313 "state": "configuring", 00:08:51.313 "raid_level": "raid0", 00:08:51.313 "superblock": true, 00:08:51.313 "num_base_bdevs": 2, 00:08:51.313 "num_base_bdevs_discovered": 1, 00:08:51.313 "num_base_bdevs_operational": 2, 00:08:51.313 "base_bdevs_list": [ 00:08:51.313 { 00:08:51.313 "name": "pt1", 00:08:51.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.313 "is_configured": true, 00:08:51.313 "data_offset": 2048, 00:08:51.313 "data_size": 63488 00:08:51.313 }, 00:08:51.313 { 00:08:51.313 "name": null, 00:08:51.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.313 "is_configured": false, 00:08:51.313 "data_offset": 2048, 00:08:51.313 "data_size": 63488 00:08:51.313 } 00:08:51.313 ] 00:08:51.313 }' 00:08:51.313 09:42:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.313 09:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.570 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:51.570 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:51.570 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:51.570 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.570 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.570 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.570 [2024-10-11 09:42:36.140246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.570 [2024-10-11 09:42:36.140396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.570 [2024-10-11 09:42:36.140445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:51.570 [2024-10-11 09:42:36.140482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.570 [2024-10-11 09:42:36.141096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.570 [2024-10-11 09:42:36.141164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.571 [2024-10-11 09:42:36.141285] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:51.571 [2024-10-11 09:42:36.141344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.571 [2024-10-11 09:42:36.141468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.571 [2024-10-11 09:42:36.141483] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:51.571 [2024-10-11 09:42:36.141753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:51.571 [2024-10-11 09:42:36.141927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.571 [2024-10-11 09:42:36.141939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:51.571 [2024-10-11 09:42:36.142079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.571 pt2 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.571 "name": "raid_bdev1", 00:08:51.571 "uuid": "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd", 00:08:51.571 "strip_size_kb": 64, 00:08:51.571 "state": "online", 00:08:51.571 "raid_level": "raid0", 00:08:51.571 "superblock": true, 00:08:51.571 "num_base_bdevs": 2, 00:08:51.571 "num_base_bdevs_discovered": 2, 00:08:51.571 "num_base_bdevs_operational": 2, 00:08:51.571 "base_bdevs_list": [ 00:08:51.571 { 00:08:51.571 "name": "pt1", 00:08:51.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.571 "is_configured": true, 00:08:51.571 "data_offset": 2048, 00:08:51.571 "data_size": 63488 00:08:51.571 }, 00:08:51.571 { 00:08:51.571 "name": "pt2", 00:08:51.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.571 "is_configured": true, 00:08:51.571 "data_offset": 2048, 00:08:51.571 "data_size": 63488 00:08:51.571 } 00:08:51.571 ] 00:08:51.571 }' 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.571 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.138 
09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.138 [2024-10-11 09:42:36.595788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.138 "name": "raid_bdev1", 00:08:52.138 "aliases": [ 00:08:52.138 "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd" 00:08:52.138 ], 00:08:52.138 "product_name": "Raid Volume", 00:08:52.138 "block_size": 512, 00:08:52.138 "num_blocks": 126976, 00:08:52.138 "uuid": "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd", 00:08:52.138 "assigned_rate_limits": { 00:08:52.138 "rw_ios_per_sec": 0, 00:08:52.138 "rw_mbytes_per_sec": 0, 00:08:52.138 "r_mbytes_per_sec": 0, 00:08:52.138 "w_mbytes_per_sec": 0 00:08:52.138 }, 00:08:52.138 "claimed": false, 00:08:52.138 "zoned": false, 00:08:52.138 "supported_io_types": { 00:08:52.138 "read": true, 00:08:52.138 "write": true, 00:08:52.138 "unmap": true, 00:08:52.138 "flush": true, 00:08:52.138 "reset": true, 00:08:52.138 "nvme_admin": false, 00:08:52.138 "nvme_io": false, 00:08:52.138 "nvme_io_md": false, 00:08:52.138 
"write_zeroes": true, 00:08:52.138 "zcopy": false, 00:08:52.138 "get_zone_info": false, 00:08:52.138 "zone_management": false, 00:08:52.138 "zone_append": false, 00:08:52.138 "compare": false, 00:08:52.138 "compare_and_write": false, 00:08:52.138 "abort": false, 00:08:52.138 "seek_hole": false, 00:08:52.138 "seek_data": false, 00:08:52.138 "copy": false, 00:08:52.138 "nvme_iov_md": false 00:08:52.138 }, 00:08:52.138 "memory_domains": [ 00:08:52.138 { 00:08:52.138 "dma_device_id": "system", 00:08:52.138 "dma_device_type": 1 00:08:52.138 }, 00:08:52.138 { 00:08:52.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.138 "dma_device_type": 2 00:08:52.138 }, 00:08:52.138 { 00:08:52.138 "dma_device_id": "system", 00:08:52.138 "dma_device_type": 1 00:08:52.138 }, 00:08:52.138 { 00:08:52.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.138 "dma_device_type": 2 00:08:52.138 } 00:08:52.138 ], 00:08:52.138 "driver_specific": { 00:08:52.138 "raid": { 00:08:52.138 "uuid": "bb5f01ac-93be-47e3-ba88-fbe0593fa8dd", 00:08:52.138 "strip_size_kb": 64, 00:08:52.138 "state": "online", 00:08:52.138 "raid_level": "raid0", 00:08:52.138 "superblock": true, 00:08:52.138 "num_base_bdevs": 2, 00:08:52.138 "num_base_bdevs_discovered": 2, 00:08:52.138 "num_base_bdevs_operational": 2, 00:08:52.138 "base_bdevs_list": [ 00:08:52.138 { 00:08:52.138 "name": "pt1", 00:08:52.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.138 "is_configured": true, 00:08:52.138 "data_offset": 2048, 00:08:52.138 "data_size": 63488 00:08:52.138 }, 00:08:52.138 { 00:08:52.138 "name": "pt2", 00:08:52.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.138 "is_configured": true, 00:08:52.138 "data_offset": 2048, 00:08:52.138 "data_size": 63488 00:08:52.138 } 00:08:52.138 ] 00:08:52.138 } 00:08:52.138 } 00:08:52.138 }' 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.138 pt2' 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.138 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.397 09:42:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:52.397 [2024-10-11 09:42:36.827388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bb5f01ac-93be-47e3-ba88-fbe0593fa8dd '!=' bb5f01ac-93be-47e3-ba88-fbe0593fa8dd ']' 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:52.397 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61625 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61625 ']' 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61625 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61625 00:08:52.398 killing process with pid 61625 
00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61625' 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61625 00:08:52.398 [2024-10-11 09:42:36.914411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.398 09:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61625 00:08:52.398 [2024-10-11 09:42:36.914523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.398 [2024-10-11 09:42:36.914580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.398 [2024-10-11 09:42:36.914599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:52.656 [2024-10-11 09:42:37.131928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.031 ************************************ 00:08:54.031 END TEST raid_superblock_test 00:08:54.031 ************************************ 00:08:54.031 09:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:54.031 00:08:54.031 real 0m4.714s 00:08:54.031 user 0m6.683s 00:08:54.031 sys 0m0.713s 00:08:54.031 09:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.031 09:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.031 09:42:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:54.031 09:42:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:54.031 09:42:38 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.031 09:42:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.031 ************************************ 00:08:54.031 START TEST raid_read_error_test 00:08:54.031 ************************************ 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:54.031 09:42:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YDaLHiHhOC 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61831 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61831 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61831 ']' 00:08:54.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.031 09:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.031 [2024-10-11 09:42:38.480342] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:54.031 [2024-10-11 09:42:38.480558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:08:54.031 [2024-10-11 09:42:38.648684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.290 [2024-10-11 09:42:38.782879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.549 [2024-10-11 09:42:39.009179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.549 [2024-10-11 09:42:39.009249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.807 BaseBdev1_malloc 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.807 true 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.807 [2024-10-11 09:42:39.427417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:54.807 [2024-10-11 09:42:39.427527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.807 [2024-10-11 09:42:39.427556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:54.807 [2024-10-11 09:42:39.427568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.807 [2024-10-11 09:42:39.429764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.807 [2024-10-11 09:42:39.429805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:54.807 BaseBdev1 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.807 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:55.066 BaseBdev2_malloc 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.066 true 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.066 [2024-10-11 09:42:39.499136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:55.066 [2024-10-11 09:42:39.499196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.066 [2024-10-11 09:42:39.499214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:55.066 [2024-10-11 09:42:39.499225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.066 [2024-10-11 09:42:39.501675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.066 [2024-10-11 09:42:39.501787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:55.066 BaseBdev2 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:55.066 09:42:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.066 [2024-10-11 09:42:39.511262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.066 [2024-10-11 09:42:39.513545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.066 [2024-10-11 09:42:39.513867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.066 [2024-10-11 09:42:39.513939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:55.066 [2024-10-11 09:42:39.514292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:55.066 [2024-10-11 09:42:39.514565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.066 [2024-10-11 09:42:39.514636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:55.066 [2024-10-11 09:42:39.514979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.066 "name": "raid_bdev1", 00:08:55.066 "uuid": "458e2834-45a2-42c8-a7ef-2f3b7728e206", 00:08:55.066 "strip_size_kb": 64, 00:08:55.066 "state": "online", 00:08:55.066 "raid_level": "raid0", 00:08:55.066 "superblock": true, 00:08:55.066 "num_base_bdevs": 2, 00:08:55.066 "num_base_bdevs_discovered": 2, 00:08:55.066 "num_base_bdevs_operational": 2, 00:08:55.066 "base_bdevs_list": [ 00:08:55.066 { 00:08:55.066 "name": "BaseBdev1", 00:08:55.066 "uuid": "0a79c966-9ced-5bcd-9993-b5639dc0c06c", 00:08:55.066 "is_configured": true, 00:08:55.066 "data_offset": 2048, 00:08:55.066 "data_size": 63488 00:08:55.066 }, 00:08:55.066 { 00:08:55.066 "name": "BaseBdev2", 00:08:55.066 "uuid": "5eb4abaf-a829-5fa1-9dc1-7732b77f682a", 00:08:55.066 "is_configured": true, 00:08:55.066 "data_offset": 2048, 00:08:55.066 "data_size": 63488 00:08:55.066 } 00:08:55.066 ] 00:08:55.066 }' 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.066 09:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.632 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:55.632 09:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:55.632 [2024-10-11 09:42:40.096054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.567 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.568 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.568 09:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.568 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.568 09:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.568 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.568 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.568 "name": "raid_bdev1", 00:08:56.568 "uuid": "458e2834-45a2-42c8-a7ef-2f3b7728e206", 00:08:56.568 "strip_size_kb": 64, 00:08:56.568 "state": "online", 00:08:56.568 "raid_level": "raid0", 00:08:56.568 "superblock": true, 00:08:56.568 "num_base_bdevs": 2, 00:08:56.568 "num_base_bdevs_discovered": 2, 00:08:56.568 "num_base_bdevs_operational": 2, 00:08:56.568 "base_bdevs_list": [ 00:08:56.568 { 00:08:56.568 "name": "BaseBdev1", 00:08:56.568 "uuid": "0a79c966-9ced-5bcd-9993-b5639dc0c06c", 00:08:56.568 "is_configured": true, 00:08:56.568 "data_offset": 2048, 00:08:56.568 "data_size": 63488 00:08:56.568 }, 00:08:56.568 { 00:08:56.568 "name": "BaseBdev2", 00:08:56.568 "uuid": "5eb4abaf-a829-5fa1-9dc1-7732b77f682a", 00:08:56.568 "is_configured": true, 00:08:56.568 "data_offset": 2048, 00:08:56.568 "data_size": 63488 00:08:56.568 } 00:08:56.568 ] 00:08:56.568 }' 00:08:56.568 09:42:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.568 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.136 [2024-10-11 09:42:41.485160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.136 [2024-10-11 09:42:41.485274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.136 [2024-10-11 09:42:41.488558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.136 [2024-10-11 09:42:41.488658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.136 [2024-10-11 09:42:41.488718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.136 [2024-10-11 09:42:41.488794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:57.136 { 00:08:57.136 "results": [ 00:08:57.136 { 00:08:57.136 "job": "raid_bdev1", 00:08:57.136 "core_mask": "0x1", 00:08:57.136 "workload": "randrw", 00:08:57.136 "percentage": 50, 00:08:57.136 "status": "finished", 00:08:57.136 "queue_depth": 1, 00:08:57.136 "io_size": 131072, 00:08:57.136 "runtime": 1.389781, 00:08:57.136 "iops": 13233.020166486662, 00:08:57.136 "mibps": 1654.1275208108327, 00:08:57.136 "io_failed": 1, 00:08:57.136 "io_timeout": 0, 00:08:57.136 "avg_latency_us": 104.58032598186794, 00:08:57.136 "min_latency_us": 33.53711790393013, 00:08:57.136 "max_latency_us": 1767.1825327510917 00:08:57.136 } 00:08:57.136 ], 00:08:57.136 "core_count": 1 00:08:57.136 } 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61831 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61831 ']' 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61831 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61831 00:08:57.136 killing process with pid 61831 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61831' 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61831 00:08:57.136 [2024-10-11 09:42:41.528963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.136 09:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61831 00:08:57.136 [2024-10-11 09:42:41.693341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YDaLHiHhOC 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:58.514 ************************************ 00:08:58.514 END TEST raid_read_error_test 00:08:58.514 ************************************ 00:08:58.514 09:42:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:58.514 00:08:58.514 real 0m4.660s 00:08:58.514 user 0m5.671s 00:08:58.514 sys 0m0.532s 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.514 09:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.514 09:42:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:58.514 09:42:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:58.514 09:42:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.514 09:42:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.514 ************************************ 00:08:58.514 START TEST raid_write_error_test 00:08:58.514 ************************************ 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.514 09:42:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.O1gzSjPIxe 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61982 00:08:58.514 09:42:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61982 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61982 ']' 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.514 09:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.774 [2024-10-11 09:42:43.192294] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:08:58.774 [2024-10-11 09:42:43.192542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61982 ] 00:08:58.774 [2024-10-11 09:42:43.360928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.047 [2024-10-11 09:42:43.502570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.306 [2024-10-11 09:42:43.757665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.306 [2024-10-11 09:42:43.757849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.565 BaseBdev1_malloc 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.565 true 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.565 [2024-10-11 09:42:44.180129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.565 [2024-10-11 09:42:44.180197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.565 [2024-10-11 09:42:44.180223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.565 [2024-10-11 09:42:44.180236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.565 [2024-10-11 09:42:44.182810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.565 [2024-10-11 09:42:44.182855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.565 BaseBdev1 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.565 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 BaseBdev2_malloc 00:08:59.824 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.824 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:59.824 09:42:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.824 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.824 true 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.825 [2024-10-11 09:42:44.252246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:59.825 [2024-10-11 09:42:44.252404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.825 [2024-10-11 09:42:44.252449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:59.825 [2024-10-11 09:42:44.252511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.825 [2024-10-11 09:42:44.255093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.825 [2024-10-11 09:42:44.255199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:59.825 BaseBdev2 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.825 [2024-10-11 09:42:44.264296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:59.825 [2024-10-11 09:42:44.266474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.825 [2024-10-11 09:42:44.266809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.825 [2024-10-11 09:42:44.266834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:59.825 [2024-10-11 09:42:44.267153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:59.825 [2024-10-11 09:42:44.267367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.825 [2024-10-11 09:42:44.267382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.825 [2024-10-11 09:42:44.267581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.825 "name": "raid_bdev1", 00:08:59.825 "uuid": "cd298be1-996c-4436-bec8-f9c1bce72164", 00:08:59.825 "strip_size_kb": 64, 00:08:59.825 "state": "online", 00:08:59.825 "raid_level": "raid0", 00:08:59.825 "superblock": true, 00:08:59.825 "num_base_bdevs": 2, 00:08:59.825 "num_base_bdevs_discovered": 2, 00:08:59.825 "num_base_bdevs_operational": 2, 00:08:59.825 "base_bdevs_list": [ 00:08:59.825 { 00:08:59.825 "name": "BaseBdev1", 00:08:59.825 "uuid": "290fec7a-4ed2-5c6b-a8e2-c050e5d2adb5", 00:08:59.825 "is_configured": true, 00:08:59.825 "data_offset": 2048, 00:08:59.825 "data_size": 63488 00:08:59.825 }, 00:08:59.825 { 00:08:59.825 "name": "BaseBdev2", 00:08:59.825 "uuid": "a919a235-5d75-5505-91e4-ca14c488fa91", 00:08:59.825 "is_configured": true, 00:08:59.825 "data_offset": 2048, 00:08:59.825 "data_size": 63488 00:08:59.825 } 00:08:59.825 ] 00:08:59.825 }' 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.825 09:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.394 09:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:00.394 09:42:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:00.394 [2024-10-11 09:42:44.853021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.330 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.330 09:42:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.331 "name": "raid_bdev1", 00:09:01.331 "uuid": "cd298be1-996c-4436-bec8-f9c1bce72164", 00:09:01.331 "strip_size_kb": 64, 00:09:01.331 "state": "online", 00:09:01.331 "raid_level": "raid0", 00:09:01.331 "superblock": true, 00:09:01.331 "num_base_bdevs": 2, 00:09:01.331 "num_base_bdevs_discovered": 2, 00:09:01.331 "num_base_bdevs_operational": 2, 00:09:01.331 "base_bdevs_list": [ 00:09:01.331 { 00:09:01.331 "name": "BaseBdev1", 00:09:01.331 "uuid": "290fec7a-4ed2-5c6b-a8e2-c050e5d2adb5", 00:09:01.331 "is_configured": true, 00:09:01.331 "data_offset": 2048, 00:09:01.331 "data_size": 63488 00:09:01.331 }, 00:09:01.331 { 00:09:01.331 "name": "BaseBdev2", 00:09:01.331 "uuid": "a919a235-5d75-5505-91e4-ca14c488fa91", 00:09:01.331 "is_configured": true, 00:09:01.331 "data_offset": 2048, 00:09:01.331 "data_size": 63488 00:09:01.331 } 00:09:01.331 ] 00:09:01.331 }' 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.331 09:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.896 [2024-10-11 09:42:46.225585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.896 [2024-10-11 09:42:46.225626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.896 [2024-10-11 09:42:46.229062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.896 [2024-10-11 09:42:46.229164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.896 [2024-10-11 09:42:46.229206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.896 [2024-10-11 09:42:46.229222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:01.896 { 00:09:01.896 "results": [ 00:09:01.896 { 00:09:01.896 "job": "raid_bdev1", 00:09:01.896 "core_mask": "0x1", 00:09:01.896 "workload": "randrw", 00:09:01.896 "percentage": 50, 00:09:01.896 "status": "finished", 00:09:01.896 "queue_depth": 1, 00:09:01.896 "io_size": 131072, 00:09:01.896 "runtime": 1.373037, 00:09:01.896 "iops": 13730.875424333066, 00:09:01.896 "mibps": 1716.3594280416332, 00:09:01.896 "io_failed": 1, 00:09:01.896 "io_timeout": 0, 00:09:01.896 "avg_latency_us": 101.04573771425845, 00:09:01.896 "min_latency_us": 28.618340611353712, 00:09:01.896 "max_latency_us": 1638.4 00:09:01.896 } 00:09:01.896 ], 00:09:01.896 "core_count": 1 00:09:01.896 } 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61982 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 
-- # '[' -z 61982 ']' 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61982 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61982 00:09:01.896 killing process with pid 61982 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61982' 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61982 00:09:01.896 [2024-10-11 09:42:46.265189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.896 09:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61982 00:09:01.896 [2024-10-11 09:42:46.406832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.O1gzSjPIxe 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:03.274 ************************************ 00:09:03.274 END TEST raid_write_error_test 00:09:03.274 ************************************ 00:09:03.274 00:09:03.274 real 0m4.595s 00:09:03.274 user 0m5.597s 00:09:03.274 sys 0m0.556s 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.274 09:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 09:42:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:03.274 09:42:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:03.274 09:42:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:03.274 09:42:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.274 09:42:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 ************************************ 00:09:03.274 START TEST raid_state_function_test 00:09:03.274 ************************************ 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.274 09:42:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62120 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.274 Process raid pid: 62120 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62120' 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62120 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62120 ']' 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.274 09:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.274 [2024-10-11 09:42:47.845708] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:03.274 [2024-10-11 09:42:47.845871] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.533 [2024-10-11 09:42:48.015250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.533 [2024-10-11 09:42:48.152717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.791 [2024-10-11 09:42:48.398278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.792 [2024-10-11 09:42:48.398335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.359 [2024-10-11 09:42:48.751073] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.359 [2024-10-11 09:42:48.751212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.359 [2024-10-11 09:42:48.751228] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.359 [2024-10-11 09:42:48.751240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.359 09:42:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.359 "name": "Existed_Raid", 00:09:04.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.359 "strip_size_kb": 64, 00:09:04.359 "state": "configuring", 00:09:04.359 
"raid_level": "concat", 00:09:04.359 "superblock": false, 00:09:04.359 "num_base_bdevs": 2, 00:09:04.359 "num_base_bdevs_discovered": 0, 00:09:04.359 "num_base_bdevs_operational": 2, 00:09:04.359 "base_bdevs_list": [ 00:09:04.359 { 00:09:04.359 "name": "BaseBdev1", 00:09:04.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.359 "is_configured": false, 00:09:04.359 "data_offset": 0, 00:09:04.359 "data_size": 0 00:09:04.359 }, 00:09:04.359 { 00:09:04.359 "name": "BaseBdev2", 00:09:04.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.359 "is_configured": false, 00:09:04.359 "data_offset": 0, 00:09:04.359 "data_size": 0 00:09:04.359 } 00:09:04.359 ] 00:09:04.359 }' 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.359 09:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.618 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.618 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.618 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.618 [2024-10-11 09:42:49.238206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.618 [2024-10-11 09:42:49.238308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:04.618 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.618 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:04.618 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.618 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:04.877 [2024-10-11 09:42:49.254212] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.877 [2024-10-11 09:42:49.254303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.877 [2024-10-11 09:42:49.254334] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.877 [2024-10-11 09:42:49.254363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.877 [2024-10-11 09:42:49.309540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.877 BaseBdev1 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.877 [ 00:09:04.877 { 00:09:04.877 "name": "BaseBdev1", 00:09:04.877 "aliases": [ 00:09:04.877 "713d80be-5cff-4dc0-b68d-00eb2a971097" 00:09:04.877 ], 00:09:04.877 "product_name": "Malloc disk", 00:09:04.877 "block_size": 512, 00:09:04.877 "num_blocks": 65536, 00:09:04.877 "uuid": "713d80be-5cff-4dc0-b68d-00eb2a971097", 00:09:04.877 "assigned_rate_limits": { 00:09:04.877 "rw_ios_per_sec": 0, 00:09:04.877 "rw_mbytes_per_sec": 0, 00:09:04.877 "r_mbytes_per_sec": 0, 00:09:04.877 "w_mbytes_per_sec": 0 00:09:04.877 }, 00:09:04.877 "claimed": true, 00:09:04.877 "claim_type": "exclusive_write", 00:09:04.877 "zoned": false, 00:09:04.877 "supported_io_types": { 00:09:04.877 "read": true, 00:09:04.877 "write": true, 00:09:04.877 "unmap": true, 00:09:04.877 "flush": true, 00:09:04.877 "reset": true, 00:09:04.877 "nvme_admin": false, 00:09:04.877 "nvme_io": false, 00:09:04.877 "nvme_io_md": false, 00:09:04.877 "write_zeroes": true, 00:09:04.877 "zcopy": true, 00:09:04.877 "get_zone_info": false, 00:09:04.877 "zone_management": false, 00:09:04.877 "zone_append": false, 00:09:04.877 "compare": false, 00:09:04.877 "compare_and_write": false, 00:09:04.877 "abort": true, 00:09:04.877 "seek_hole": false, 00:09:04.877 "seek_data": false, 00:09:04.877 "copy": true, 00:09:04.877 "nvme_iov_md": 
false 00:09:04.877 }, 00:09:04.877 "memory_domains": [ 00:09:04.877 { 00:09:04.877 "dma_device_id": "system", 00:09:04.877 "dma_device_type": 1 00:09:04.877 }, 00:09:04.877 { 00:09:04.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.877 "dma_device_type": 2 00:09:04.877 } 00:09:04.877 ], 00:09:04.877 "driver_specific": {} 00:09:04.877 } 00:09:04.877 ] 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.877 09:42:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.877 "name": "Existed_Raid", 00:09:04.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.877 "strip_size_kb": 64, 00:09:04.877 "state": "configuring", 00:09:04.877 "raid_level": "concat", 00:09:04.877 "superblock": false, 00:09:04.877 "num_base_bdevs": 2, 00:09:04.877 "num_base_bdevs_discovered": 1, 00:09:04.877 "num_base_bdevs_operational": 2, 00:09:04.877 "base_bdevs_list": [ 00:09:04.877 { 00:09:04.877 "name": "BaseBdev1", 00:09:04.877 "uuid": "713d80be-5cff-4dc0-b68d-00eb2a971097", 00:09:04.877 "is_configured": true, 00:09:04.877 "data_offset": 0, 00:09:04.877 "data_size": 65536 00:09:04.877 }, 00:09:04.877 { 00:09:04.877 "name": "BaseBdev2", 00:09:04.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.877 "is_configured": false, 00:09:04.877 "data_offset": 0, 00:09:04.877 "data_size": 0 00:09:04.877 } 00:09:04.877 ] 00:09:04.877 }' 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.877 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.444 [2024-10-11 09:42:49.808832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.444 [2024-10-11 09:42:49.808969] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.444 [2024-10-11 09:42:49.816904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.444 [2024-10-11 09:42:49.819170] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.444 [2024-10-11 09:42:49.819285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.444 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.445 "name": "Existed_Raid", 00:09:05.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.445 "strip_size_kb": 64, 00:09:05.445 "state": "configuring", 00:09:05.445 "raid_level": "concat", 00:09:05.445 "superblock": false, 00:09:05.445 "num_base_bdevs": 2, 00:09:05.445 "num_base_bdevs_discovered": 1, 00:09:05.445 "num_base_bdevs_operational": 2, 00:09:05.445 "base_bdevs_list": [ 00:09:05.445 { 00:09:05.445 "name": "BaseBdev1", 00:09:05.445 "uuid": "713d80be-5cff-4dc0-b68d-00eb2a971097", 00:09:05.445 "is_configured": true, 00:09:05.445 "data_offset": 0, 00:09:05.445 "data_size": 65536 00:09:05.445 }, 00:09:05.445 { 00:09:05.445 "name": "BaseBdev2", 00:09:05.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.445 "is_configured": false, 00:09:05.445 "data_offset": 0, 00:09:05.445 "data_size": 0 
00:09:05.445 } 00:09:05.445 ] 00:09:05.445 }' 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.445 09:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.705 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.705 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.705 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 [2024-10-11 09:42:50.348459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.964 [2024-10-11 09:42:50.348512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.964 [2024-10-11 09:42:50.348520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:05.964 [2024-10-11 09:42:50.348834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:05.964 [2024-10-11 09:42:50.349019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.964 [2024-10-11 09:42:50.349083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:05.964 [2024-10-11 09:42:50.349394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.964 BaseBdev2 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.964 09:42:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 [ 00:09:05.964 { 00:09:05.964 "name": "BaseBdev2", 00:09:05.964 "aliases": [ 00:09:05.964 "07282919-37b4-4897-ba54-1ab1dd52ca1f" 00:09:05.964 ], 00:09:05.964 "product_name": "Malloc disk", 00:09:05.964 "block_size": 512, 00:09:05.964 "num_blocks": 65536, 00:09:05.964 "uuid": "07282919-37b4-4897-ba54-1ab1dd52ca1f", 00:09:05.964 "assigned_rate_limits": { 00:09:05.964 "rw_ios_per_sec": 0, 00:09:05.964 "rw_mbytes_per_sec": 0, 00:09:05.964 "r_mbytes_per_sec": 0, 00:09:05.964 "w_mbytes_per_sec": 0 00:09:05.964 }, 00:09:05.964 "claimed": true, 00:09:05.964 "claim_type": "exclusive_write", 00:09:05.964 "zoned": false, 00:09:05.964 "supported_io_types": { 00:09:05.964 "read": true, 00:09:05.964 "write": true, 00:09:05.964 "unmap": true, 00:09:05.964 "flush": true, 00:09:05.964 "reset": true, 00:09:05.964 "nvme_admin": false, 00:09:05.964 "nvme_io": false, 00:09:05.964 "nvme_io_md": 
false, 00:09:05.964 "write_zeroes": true, 00:09:05.964 "zcopy": true, 00:09:05.964 "get_zone_info": false, 00:09:05.964 "zone_management": false, 00:09:05.964 "zone_append": false, 00:09:05.964 "compare": false, 00:09:05.964 "compare_and_write": false, 00:09:05.964 "abort": true, 00:09:05.964 "seek_hole": false, 00:09:05.964 "seek_data": false, 00:09:05.964 "copy": true, 00:09:05.964 "nvme_iov_md": false 00:09:05.964 }, 00:09:05.964 "memory_domains": [ 00:09:05.964 { 00:09:05.964 "dma_device_id": "system", 00:09:05.964 "dma_device_type": 1 00:09:05.964 }, 00:09:05.964 { 00:09:05.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.964 "dma_device_type": 2 00:09:05.964 } 00:09:05.964 ], 00:09:05.964 "driver_specific": {} 00:09:05.964 } 00:09:05.964 ] 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.964 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.964 "name": "Existed_Raid", 00:09:05.964 "uuid": "17fe9ff0-3dd0-497b-aa31-f0d5a0d5b78e", 00:09:05.964 "strip_size_kb": 64, 00:09:05.964 "state": "online", 00:09:05.964 "raid_level": "concat", 00:09:05.964 "superblock": false, 00:09:05.964 "num_base_bdevs": 2, 00:09:05.964 "num_base_bdevs_discovered": 2, 00:09:05.964 "num_base_bdevs_operational": 2, 00:09:05.964 "base_bdevs_list": [ 00:09:05.964 { 00:09:05.964 "name": "BaseBdev1", 00:09:05.964 "uuid": "713d80be-5cff-4dc0-b68d-00eb2a971097", 00:09:05.964 "is_configured": true, 00:09:05.964 "data_offset": 0, 00:09:05.964 "data_size": 65536 00:09:05.964 }, 00:09:05.964 { 00:09:05.964 "name": "BaseBdev2", 00:09:05.964 "uuid": "07282919-37b4-4897-ba54-1ab1dd52ca1f", 00:09:05.965 "is_configured": true, 00:09:05.965 "data_offset": 0, 00:09:05.965 "data_size": 65536 00:09:05.965 } 00:09:05.965 ] 00:09:05.965 }' 00:09:05.965 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:05.965 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.224 [2024-10-11 09:42:50.804144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.224 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.224 "name": "Existed_Raid", 00:09:06.224 "aliases": [ 00:09:06.224 "17fe9ff0-3dd0-497b-aa31-f0d5a0d5b78e" 00:09:06.224 ], 00:09:06.224 "product_name": "Raid Volume", 00:09:06.224 "block_size": 512, 00:09:06.224 "num_blocks": 131072, 00:09:06.224 "uuid": "17fe9ff0-3dd0-497b-aa31-f0d5a0d5b78e", 00:09:06.224 "assigned_rate_limits": { 00:09:06.224 "rw_ios_per_sec": 0, 00:09:06.224 "rw_mbytes_per_sec": 0, 00:09:06.224 "r_mbytes_per_sec": 
0, 00:09:06.224 "w_mbytes_per_sec": 0 00:09:06.224 }, 00:09:06.224 "claimed": false, 00:09:06.224 "zoned": false, 00:09:06.224 "supported_io_types": { 00:09:06.224 "read": true, 00:09:06.224 "write": true, 00:09:06.224 "unmap": true, 00:09:06.224 "flush": true, 00:09:06.224 "reset": true, 00:09:06.224 "nvme_admin": false, 00:09:06.224 "nvme_io": false, 00:09:06.224 "nvme_io_md": false, 00:09:06.224 "write_zeroes": true, 00:09:06.224 "zcopy": false, 00:09:06.224 "get_zone_info": false, 00:09:06.224 "zone_management": false, 00:09:06.224 "zone_append": false, 00:09:06.224 "compare": false, 00:09:06.224 "compare_and_write": false, 00:09:06.224 "abort": false, 00:09:06.224 "seek_hole": false, 00:09:06.224 "seek_data": false, 00:09:06.224 "copy": false, 00:09:06.224 "nvme_iov_md": false 00:09:06.224 }, 00:09:06.224 "memory_domains": [ 00:09:06.224 { 00:09:06.224 "dma_device_id": "system", 00:09:06.224 "dma_device_type": 1 00:09:06.224 }, 00:09:06.224 { 00:09:06.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.224 "dma_device_type": 2 00:09:06.224 }, 00:09:06.224 { 00:09:06.224 "dma_device_id": "system", 00:09:06.224 "dma_device_type": 1 00:09:06.224 }, 00:09:06.224 { 00:09:06.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.224 "dma_device_type": 2 00:09:06.224 } 00:09:06.224 ], 00:09:06.224 "driver_specific": { 00:09:06.224 "raid": { 00:09:06.224 "uuid": "17fe9ff0-3dd0-497b-aa31-f0d5a0d5b78e", 00:09:06.224 "strip_size_kb": 64, 00:09:06.224 "state": "online", 00:09:06.224 "raid_level": "concat", 00:09:06.224 "superblock": false, 00:09:06.224 "num_base_bdevs": 2, 00:09:06.224 "num_base_bdevs_discovered": 2, 00:09:06.224 "num_base_bdevs_operational": 2, 00:09:06.224 "base_bdevs_list": [ 00:09:06.224 { 00:09:06.224 "name": "BaseBdev1", 00:09:06.224 "uuid": "713d80be-5cff-4dc0-b68d-00eb2a971097", 00:09:06.225 "is_configured": true, 00:09:06.225 "data_offset": 0, 00:09:06.225 "data_size": 65536 00:09:06.225 }, 00:09:06.225 { 00:09:06.225 "name": "BaseBdev2", 
00:09:06.225 "uuid": "07282919-37b4-4897-ba54-1ab1dd52ca1f", 00:09:06.225 "is_configured": true, 00:09:06.225 "data_offset": 0, 00:09:06.225 "data_size": 65536 00:09:06.225 } 00:09:06.225 ] 00:09:06.225 } 00:09:06.225 } 00:09:06.225 }' 00:09:06.225 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:06.485 BaseBdev2' 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.485 09:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.485 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.485 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.485 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.485 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.485 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.485 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.485 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.485 [2024-10-11 09:42:51.059447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.485 [2024-10-11 09:42:51.059487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.485 [2024-10-11 09:42:51.059547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.745 "name": "Existed_Raid", 00:09:06.745 "uuid": "17fe9ff0-3dd0-497b-aa31-f0d5a0d5b78e", 00:09:06.745 "strip_size_kb": 64, 00:09:06.745 
"state": "offline", 00:09:06.745 "raid_level": "concat", 00:09:06.745 "superblock": false, 00:09:06.745 "num_base_bdevs": 2, 00:09:06.745 "num_base_bdevs_discovered": 1, 00:09:06.745 "num_base_bdevs_operational": 1, 00:09:06.745 "base_bdevs_list": [ 00:09:06.745 { 00:09:06.745 "name": null, 00:09:06.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.745 "is_configured": false, 00:09:06.745 "data_offset": 0, 00:09:06.745 "data_size": 65536 00:09:06.745 }, 00:09:06.745 { 00:09:06.745 "name": "BaseBdev2", 00:09:06.745 "uuid": "07282919-37b4-4897-ba54-1ab1dd52ca1f", 00:09:06.745 "is_configured": true, 00:09:06.745 "data_offset": 0, 00:09:06.745 "data_size": 65536 00:09:06.745 } 00:09:06.745 ] 00:09:06.745 }' 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.745 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:07.005 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.005 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.005 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.005 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.005 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.005 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.265 [2024-10-11 09:42:51.649586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.265 [2024-10-11 09:42:51.649712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62120 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62120 ']' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 62120 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62120 00:09:07.265 killing process with pid 62120 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62120' 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62120 00:09:07.265 [2024-10-11 09:42:51.834253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.265 09:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62120 00:09:07.265 [2024-10-11 09:42:51.852229] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.647 09:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:08.647 00:09:08.647 real 0m5.230s 00:09:08.647 user 0m7.623s 00:09:08.647 sys 0m0.827s 00:09:08.647 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.647 09:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.647 ************************************ 00:09:08.647 END TEST raid_state_function_test 00:09:08.647 ************************************ 00:09:08.647 09:42:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:08.647 09:42:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:09:08.647 09:42:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.647 09:42:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.647 ************************************ 00:09:08.647 START TEST raid_state_function_test_sb 00:09:08.647 ************************************ 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.647 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62373 00:09:08.648 Process raid pid: 62373 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62373' 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62373 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62373 ']' 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.648 09:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.648 [2024-10-11 09:42:53.120882] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:08.648 [2024-10-11 09:42:53.121108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.648 [2024-10-11 09:42:53.274258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.907 [2024-10-11 09:42:53.401237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.167 [2024-10-11 09:42:53.638670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.167 [2024-10-11 09:42:53.638715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.425 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.426 [2024-10-11 09:42:54.014285] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:09.426 [2024-10-11 09:42:54.014350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.426 [2024-10-11 09:42:54.014362] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.426 [2024-10-11 09:42:54.014372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.426 
09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.426 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.684 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.684 "name": "Existed_Raid", 00:09:09.684 "uuid": "b15f3c4d-3dcc-4273-91d3-f9a447a42c5c", 00:09:09.684 "strip_size_kb": 64, 00:09:09.684 "state": "configuring", 00:09:09.684 "raid_level": "concat", 00:09:09.684 "superblock": true, 00:09:09.684 "num_base_bdevs": 2, 00:09:09.684 "num_base_bdevs_discovered": 0, 00:09:09.684 "num_base_bdevs_operational": 2, 00:09:09.684 "base_bdevs_list": [ 00:09:09.684 { 00:09:09.684 "name": "BaseBdev1", 00:09:09.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.684 "is_configured": false, 00:09:09.684 "data_offset": 0, 00:09:09.684 "data_size": 0 00:09:09.684 }, 00:09:09.684 { 00:09:09.684 "name": "BaseBdev2", 00:09:09.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.684 "is_configured": false, 00:09:09.684 "data_offset": 0, 00:09:09.684 "data_size": 0 00:09:09.684 } 00:09:09.684 ] 00:09:09.684 }' 00:09:09.684 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.684 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.944 [2024-10-11 09:42:54.485379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:09.944 [2024-10-11 09:42:54.485486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.944 [2024-10-11 09:42:54.497432] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.944 [2024-10-11 09:42:54.497538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.944 [2024-10-11 09:42:54.497571] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.944 [2024-10-11 09:42:54.497609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.944 [2024-10-11 09:42:54.549624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.944 BaseBdev1 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.944 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.204 [ 00:09:10.204 { 00:09:10.204 "name": "BaseBdev1", 00:09:10.204 "aliases": [ 00:09:10.204 "728f37ae-3a2f-4ca0-b860-719e876b918b" 00:09:10.204 ], 00:09:10.204 "product_name": "Malloc disk", 00:09:10.204 "block_size": 512, 00:09:10.204 "num_blocks": 65536, 00:09:10.204 "uuid": "728f37ae-3a2f-4ca0-b860-719e876b918b", 00:09:10.204 "assigned_rate_limits": { 00:09:10.204 "rw_ios_per_sec": 0, 00:09:10.204 "rw_mbytes_per_sec": 0, 00:09:10.204 "r_mbytes_per_sec": 0, 00:09:10.204 "w_mbytes_per_sec": 0 00:09:10.204 }, 00:09:10.204 "claimed": true, 
00:09:10.204 "claim_type": "exclusive_write", 00:09:10.204 "zoned": false, 00:09:10.204 "supported_io_types": { 00:09:10.204 "read": true, 00:09:10.204 "write": true, 00:09:10.204 "unmap": true, 00:09:10.204 "flush": true, 00:09:10.204 "reset": true, 00:09:10.204 "nvme_admin": false, 00:09:10.204 "nvme_io": false, 00:09:10.204 "nvme_io_md": false, 00:09:10.204 "write_zeroes": true, 00:09:10.204 "zcopy": true, 00:09:10.204 "get_zone_info": false, 00:09:10.204 "zone_management": false, 00:09:10.204 "zone_append": false, 00:09:10.204 "compare": false, 00:09:10.204 "compare_and_write": false, 00:09:10.204 "abort": true, 00:09:10.204 "seek_hole": false, 00:09:10.204 "seek_data": false, 00:09:10.204 "copy": true, 00:09:10.204 "nvme_iov_md": false 00:09:10.204 }, 00:09:10.204 "memory_domains": [ 00:09:10.204 { 00:09:10.204 "dma_device_id": "system", 00:09:10.204 "dma_device_type": 1 00:09:10.204 }, 00:09:10.204 { 00:09:10.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.204 "dma_device_type": 2 00:09:10.204 } 00:09:10.204 ], 00:09:10.204 "driver_specific": {} 00:09:10.204 } 00:09:10.204 ] 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.204 09:42:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.204 "name": "Existed_Raid", 00:09:10.204 "uuid": "e67ca882-2df5-4e52-91fc-4abc371ed56b", 00:09:10.204 "strip_size_kb": 64, 00:09:10.204 "state": "configuring", 00:09:10.204 "raid_level": "concat", 00:09:10.204 "superblock": true, 00:09:10.204 "num_base_bdevs": 2, 00:09:10.204 "num_base_bdevs_discovered": 1, 00:09:10.204 "num_base_bdevs_operational": 2, 00:09:10.204 "base_bdevs_list": [ 00:09:10.204 { 00:09:10.204 "name": "BaseBdev1", 00:09:10.204 "uuid": "728f37ae-3a2f-4ca0-b860-719e876b918b", 00:09:10.204 "is_configured": true, 00:09:10.204 "data_offset": 2048, 00:09:10.204 "data_size": 63488 00:09:10.204 }, 00:09:10.204 { 00:09:10.204 "name": "BaseBdev2", 00:09:10.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.204 
"is_configured": false, 00:09:10.204 "data_offset": 0, 00:09:10.204 "data_size": 0 00:09:10.204 } 00:09:10.204 ] 00:09:10.204 }' 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.204 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.464 09:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.464 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.464 09:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.464 [2024-10-11 09:42:55.004912] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.464 [2024-10-11 09:42:55.004974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:10.464 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.464 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:10.464 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.465 [2024-10-11 09:42:55.016950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.465 [2024-10-11 09:42:55.018922] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.465 [2024-10-11 09:42:55.019000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.465 09:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.465 09:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.465 "name": "Existed_Raid", 00:09:10.465 "uuid": "1f9ba926-378a-47e9-8ff2-7ab7b3a4630a", 00:09:10.465 "strip_size_kb": 64, 00:09:10.465 "state": "configuring", 00:09:10.465 "raid_level": "concat", 00:09:10.465 "superblock": true, 00:09:10.465 "num_base_bdevs": 2, 00:09:10.465 "num_base_bdevs_discovered": 1, 00:09:10.465 "num_base_bdevs_operational": 2, 00:09:10.465 "base_bdevs_list": [ 00:09:10.465 { 00:09:10.465 "name": "BaseBdev1", 00:09:10.465 "uuid": "728f37ae-3a2f-4ca0-b860-719e876b918b", 00:09:10.465 "is_configured": true, 00:09:10.465 "data_offset": 2048, 00:09:10.465 "data_size": 63488 00:09:10.465 }, 00:09:10.465 { 00:09:10.465 "name": "BaseBdev2", 00:09:10.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.465 "is_configured": false, 00:09:10.465 "data_offset": 0, 00:09:10.465 "data_size": 0 00:09:10.465 } 00:09:10.465 ] 00:09:10.465 }' 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.465 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.055 [2024-10-11 09:42:55.494874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.055 [2024-10-11 09:42:55.495267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.055 [2024-10-11 09:42:55.495290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:11.055 [2024-10-11 09:42:55.495605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:09:11.055 BaseBdev2 00:09:11.055 [2024-10-11 09:42:55.495798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.055 [2024-10-11 09:42:55.495819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.055 [2024-10-11 09:42:55.495994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.055 09:42:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.055 [ 00:09:11.055 { 00:09:11.055 "name": "BaseBdev2", 00:09:11.055 "aliases": [ 00:09:11.055 "02966c62-8de7-4df7-8261-2fdd0f82329b" 00:09:11.055 ], 00:09:11.055 "product_name": "Malloc disk", 00:09:11.055 "block_size": 512, 00:09:11.055 "num_blocks": 65536, 00:09:11.055 "uuid": "02966c62-8de7-4df7-8261-2fdd0f82329b", 00:09:11.055 "assigned_rate_limits": { 00:09:11.055 "rw_ios_per_sec": 0, 00:09:11.055 "rw_mbytes_per_sec": 0, 00:09:11.055 "r_mbytes_per_sec": 0, 00:09:11.055 "w_mbytes_per_sec": 0 00:09:11.055 }, 00:09:11.055 "claimed": true, 00:09:11.055 "claim_type": "exclusive_write", 00:09:11.055 "zoned": false, 00:09:11.055 "supported_io_types": { 00:09:11.055 "read": true, 00:09:11.055 "write": true, 00:09:11.055 "unmap": true, 00:09:11.055 "flush": true, 00:09:11.055 "reset": true, 00:09:11.055 "nvme_admin": false, 00:09:11.055 "nvme_io": false, 00:09:11.055 "nvme_io_md": false, 00:09:11.055 "write_zeroes": true, 00:09:11.055 "zcopy": true, 00:09:11.055 "get_zone_info": false, 00:09:11.055 "zone_management": false, 00:09:11.055 "zone_append": false, 00:09:11.055 "compare": false, 00:09:11.055 "compare_and_write": false, 00:09:11.055 "abort": true, 00:09:11.055 "seek_hole": false, 00:09:11.055 "seek_data": false, 00:09:11.055 "copy": true, 00:09:11.055 "nvme_iov_md": false 00:09:11.055 }, 00:09:11.055 "memory_domains": [ 00:09:11.055 { 00:09:11.055 "dma_device_id": "system", 00:09:11.055 "dma_device_type": 1 00:09:11.055 }, 00:09:11.055 { 00:09:11.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.055 "dma_device_type": 2 00:09:11.055 } 00:09:11.055 ], 00:09:11.055 "driver_specific": {} 00:09:11.055 } 00:09:11.055 ] 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:11.055 09:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.055 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.056 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.056 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.056 09:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.056 "name": "Existed_Raid", 00:09:11.056 "uuid": "1f9ba926-378a-47e9-8ff2-7ab7b3a4630a", 00:09:11.056 "strip_size_kb": 64, 00:09:11.056 "state": "online", 00:09:11.056 "raid_level": "concat", 00:09:11.056 "superblock": true, 00:09:11.056 "num_base_bdevs": 2, 00:09:11.056 "num_base_bdevs_discovered": 2, 00:09:11.056 "num_base_bdevs_operational": 2, 00:09:11.056 "base_bdevs_list": [ 00:09:11.056 { 00:09:11.056 "name": "BaseBdev1", 00:09:11.056 "uuid": "728f37ae-3a2f-4ca0-b860-719e876b918b", 00:09:11.056 "is_configured": true, 00:09:11.056 "data_offset": 2048, 00:09:11.056 "data_size": 63488 00:09:11.056 }, 00:09:11.056 { 00:09:11.056 "name": "BaseBdev2", 00:09:11.056 "uuid": "02966c62-8de7-4df7-8261-2fdd0f82329b", 00:09:11.056 "is_configured": true, 00:09:11.056 "data_offset": 2048, 00:09:11.056 "data_size": 63488 00:09:11.056 } 00:09:11.056 ] 00:09:11.056 }' 00:09:11.056 09:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.056 09:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.637 [2024-10-11 09:42:56.022431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.637 "name": "Existed_Raid", 00:09:11.637 "aliases": [ 00:09:11.637 "1f9ba926-378a-47e9-8ff2-7ab7b3a4630a" 00:09:11.637 ], 00:09:11.637 "product_name": "Raid Volume", 00:09:11.637 "block_size": 512, 00:09:11.637 "num_blocks": 126976, 00:09:11.637 "uuid": "1f9ba926-378a-47e9-8ff2-7ab7b3a4630a", 00:09:11.637 "assigned_rate_limits": { 00:09:11.637 "rw_ios_per_sec": 0, 00:09:11.637 "rw_mbytes_per_sec": 0, 00:09:11.637 "r_mbytes_per_sec": 0, 00:09:11.637 "w_mbytes_per_sec": 0 00:09:11.637 }, 00:09:11.637 "claimed": false, 00:09:11.637 "zoned": false, 00:09:11.637 "supported_io_types": { 00:09:11.637 "read": true, 00:09:11.637 "write": true, 00:09:11.637 "unmap": true, 00:09:11.637 "flush": true, 00:09:11.637 "reset": true, 00:09:11.637 "nvme_admin": false, 00:09:11.637 "nvme_io": false, 00:09:11.637 "nvme_io_md": false, 00:09:11.637 "write_zeroes": true, 00:09:11.637 "zcopy": false, 00:09:11.637 "get_zone_info": false, 00:09:11.637 "zone_management": false, 00:09:11.637 "zone_append": false, 00:09:11.637 "compare": false, 00:09:11.637 "compare_and_write": false, 00:09:11.637 "abort": false, 00:09:11.637 "seek_hole": false, 00:09:11.637 "seek_data": false, 00:09:11.637 "copy": false, 00:09:11.637 "nvme_iov_md": false 00:09:11.637 }, 00:09:11.637 "memory_domains": [ 00:09:11.637 { 00:09:11.637 
"dma_device_id": "system", 00:09:11.637 "dma_device_type": 1 00:09:11.637 }, 00:09:11.637 { 00:09:11.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.637 "dma_device_type": 2 00:09:11.637 }, 00:09:11.637 { 00:09:11.637 "dma_device_id": "system", 00:09:11.637 "dma_device_type": 1 00:09:11.637 }, 00:09:11.637 { 00:09:11.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.637 "dma_device_type": 2 00:09:11.637 } 00:09:11.637 ], 00:09:11.637 "driver_specific": { 00:09:11.637 "raid": { 00:09:11.637 "uuid": "1f9ba926-378a-47e9-8ff2-7ab7b3a4630a", 00:09:11.637 "strip_size_kb": 64, 00:09:11.637 "state": "online", 00:09:11.637 "raid_level": "concat", 00:09:11.637 "superblock": true, 00:09:11.637 "num_base_bdevs": 2, 00:09:11.637 "num_base_bdevs_discovered": 2, 00:09:11.637 "num_base_bdevs_operational": 2, 00:09:11.637 "base_bdevs_list": [ 00:09:11.637 { 00:09:11.637 "name": "BaseBdev1", 00:09:11.637 "uuid": "728f37ae-3a2f-4ca0-b860-719e876b918b", 00:09:11.637 "is_configured": true, 00:09:11.637 "data_offset": 2048, 00:09:11.637 "data_size": 63488 00:09:11.637 }, 00:09:11.637 { 00:09:11.637 "name": "BaseBdev2", 00:09:11.637 "uuid": "02966c62-8de7-4df7-8261-2fdd0f82329b", 00:09:11.637 "is_configured": true, 00:09:11.637 "data_offset": 2048, 00:09:11.637 "data_size": 63488 00:09:11.637 } 00:09:11.637 ] 00:09:11.637 } 00:09:11.637 } 00:09:11.637 }' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.637 BaseBdev2' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.637 09:42:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.637 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.638 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.638 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.638 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.638 [2024-10-11 09:42:56.249770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.638 [2024-10-11 09:42:56.249805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.638 [2024-10-11 09:42:56.249861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.897 "name": "Existed_Raid", 00:09:11.897 "uuid": "1f9ba926-378a-47e9-8ff2-7ab7b3a4630a", 00:09:11.897 "strip_size_kb": 64, 00:09:11.897 "state": "offline", 00:09:11.897 "raid_level": "concat", 00:09:11.897 "superblock": true, 00:09:11.897 "num_base_bdevs": 2, 00:09:11.897 "num_base_bdevs_discovered": 1, 00:09:11.897 "num_base_bdevs_operational": 1, 00:09:11.897 "base_bdevs_list": [ 00:09:11.897 { 00:09:11.897 "name": null, 00:09:11.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.897 "is_configured": false, 00:09:11.897 "data_offset": 0, 00:09:11.897 "data_size": 63488 00:09:11.897 }, 00:09:11.897 { 00:09:11.897 "name": "BaseBdev2", 00:09:11.897 "uuid": "02966c62-8de7-4df7-8261-2fdd0f82329b", 00:09:11.897 "is_configured": true, 00:09:11.897 "data_offset": 2048, 00:09:11.897 "data_size": 63488 00:09:11.897 } 00:09:11.897 ] 
00:09:11.897 }' 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.897 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.156 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.156 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.156 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.156 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.156 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.156 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.156 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.415 [2024-10-11 09:42:56.797954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.415 [2024-10-11 09:42:56.798073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.415 09:42:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62373 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62373 ']' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62373 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62373 00:09:12.415 killing process with pid 62373 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62373' 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62373 00:09:12.415 [2024-10-11 09:42:56.980913] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.415 09:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62373 00:09:12.415 [2024-10-11 09:42:56.997492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.795 ************************************ 00:09:13.795 END TEST raid_state_function_test_sb 00:09:13.795 ************************************ 00:09:13.795 09:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:13.795 00:09:13.795 real 0m5.123s 00:09:13.795 user 0m7.372s 00:09:13.795 sys 0m0.834s 00:09:13.795 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.795 09:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.795 09:42:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:13.795 09:42:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:13.795 09:42:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.795 09:42:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.795 ************************************ 00:09:13.795 START TEST raid_superblock_test 00:09:13.795 ************************************ 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62631 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62631 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62631 ']' 00:09:13.795 
09:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.795 09:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.795 [2024-10-11 09:42:58.308642] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:13.795 [2024-10-11 09:42:58.308891] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62631 ] 00:09:14.055 [2024-10-11 09:42:58.490150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.055 [2024-10-11 09:42:58.614236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.346 [2024-10-11 09:42:58.841902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.346 [2024-10-11 09:42:58.841955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.920 malloc1 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.920 [2024-10-11 09:42:59.351842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:14.920 [2024-10-11 09:42:59.351961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.920 [2024-10-11 09:42:59.352010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:14.920 [2024-10-11 09:42:59.352021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:14.920 [2024-10-11 09:42:59.354375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.920 [2024-10-11 09:42:59.354471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:14.920 pt1 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.920 malloc2 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.920 [2024-10-11 09:42:59.411448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.920 [2024-10-11 09:42:59.411548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.920 [2024-10-11 09:42:59.411589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:14.920 [2024-10-11 09:42:59.411622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.920 [2024-10-11 09:42:59.413723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.920 [2024-10-11 09:42:59.413830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.920 pt2 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.920 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.920 [2024-10-11 09:42:59.423494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:14.920 [2024-10-11 09:42:59.425467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.920 [2024-10-11 09:42:59.425706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:14.920 [2024-10-11 09:42:59.425743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:09:14.920 [2024-10-11 09:42:59.426018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:14.921 [2024-10-11 09:42:59.426186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:14.921 [2024-10-11 09:42:59.426198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:14.921 [2024-10-11 09:42:59.426344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.921 09:42:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.921 "name": "raid_bdev1", 00:09:14.921 "uuid": "32ac20d0-6f47-454b-a79e-f40896d080a1", 00:09:14.921 "strip_size_kb": 64, 00:09:14.921 "state": "online", 00:09:14.921 "raid_level": "concat", 00:09:14.921 "superblock": true, 00:09:14.921 "num_base_bdevs": 2, 00:09:14.921 "num_base_bdevs_discovered": 2, 00:09:14.921 "num_base_bdevs_operational": 2, 00:09:14.921 "base_bdevs_list": [ 00:09:14.921 { 00:09:14.921 "name": "pt1", 00:09:14.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.921 "is_configured": true, 00:09:14.921 "data_offset": 2048, 00:09:14.921 "data_size": 63488 00:09:14.921 }, 00:09:14.921 { 00:09:14.921 "name": "pt2", 00:09:14.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.921 "is_configured": true, 00:09:14.921 "data_offset": 2048, 00:09:14.921 "data_size": 63488 00:09:14.921 } 00:09:14.921 ] 00:09:14.921 }' 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.921 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.490 
09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.490 [2024-10-11 09:42:59.918974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.490 "name": "raid_bdev1", 00:09:15.490 "aliases": [ 00:09:15.490 "32ac20d0-6f47-454b-a79e-f40896d080a1" 00:09:15.490 ], 00:09:15.490 "product_name": "Raid Volume", 00:09:15.490 "block_size": 512, 00:09:15.490 "num_blocks": 126976, 00:09:15.490 "uuid": "32ac20d0-6f47-454b-a79e-f40896d080a1", 00:09:15.490 "assigned_rate_limits": { 00:09:15.490 "rw_ios_per_sec": 0, 00:09:15.490 "rw_mbytes_per_sec": 0, 00:09:15.490 "r_mbytes_per_sec": 0, 00:09:15.490 "w_mbytes_per_sec": 0 00:09:15.490 }, 00:09:15.490 "claimed": false, 00:09:15.490 "zoned": false, 00:09:15.490 "supported_io_types": { 00:09:15.490 "read": true, 00:09:15.490 "write": true, 00:09:15.490 "unmap": true, 00:09:15.490 "flush": true, 00:09:15.490 "reset": true, 00:09:15.490 "nvme_admin": false, 00:09:15.490 "nvme_io": false, 00:09:15.490 "nvme_io_md": false, 00:09:15.490 "write_zeroes": true, 00:09:15.490 "zcopy": false, 00:09:15.490 "get_zone_info": false, 00:09:15.490 "zone_management": false, 00:09:15.490 "zone_append": false, 00:09:15.490 "compare": false, 00:09:15.490 "compare_and_write": false, 00:09:15.490 "abort": false, 00:09:15.490 "seek_hole": false, 00:09:15.490 
"seek_data": false, 00:09:15.490 "copy": false, 00:09:15.490 "nvme_iov_md": false 00:09:15.490 }, 00:09:15.490 "memory_domains": [ 00:09:15.490 { 00:09:15.490 "dma_device_id": "system", 00:09:15.490 "dma_device_type": 1 00:09:15.490 }, 00:09:15.490 { 00:09:15.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.490 "dma_device_type": 2 00:09:15.490 }, 00:09:15.490 { 00:09:15.490 "dma_device_id": "system", 00:09:15.490 "dma_device_type": 1 00:09:15.490 }, 00:09:15.490 { 00:09:15.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.490 "dma_device_type": 2 00:09:15.490 } 00:09:15.490 ], 00:09:15.490 "driver_specific": { 00:09:15.490 "raid": { 00:09:15.490 "uuid": "32ac20d0-6f47-454b-a79e-f40896d080a1", 00:09:15.490 "strip_size_kb": 64, 00:09:15.490 "state": "online", 00:09:15.490 "raid_level": "concat", 00:09:15.490 "superblock": true, 00:09:15.490 "num_base_bdevs": 2, 00:09:15.490 "num_base_bdevs_discovered": 2, 00:09:15.490 "num_base_bdevs_operational": 2, 00:09:15.490 "base_bdevs_list": [ 00:09:15.490 { 00:09:15.490 "name": "pt1", 00:09:15.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.490 "is_configured": true, 00:09:15.490 "data_offset": 2048, 00:09:15.490 "data_size": 63488 00:09:15.490 }, 00:09:15.490 { 00:09:15.490 "name": "pt2", 00:09:15.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.490 "is_configured": true, 00:09:15.490 "data_offset": 2048, 00:09:15.490 "data_size": 63488 00:09:15.490 } 00:09:15.490 ] 00:09:15.490 } 00:09:15.490 } 00:09:15.490 }' 00:09:15.490 09:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:15.490 pt2' 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.490 09:43:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.490 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.749 [2024-10-11 09:43:00.158478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32ac20d0-6f47-454b-a79e-f40896d080a1 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 32ac20d0-6f47-454b-a79e-f40896d080a1 ']' 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.749 [2024-10-11 09:43:00.206144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.749 [2024-10-11 09:43:00.206217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.749 [2024-10-11 09:43:00.206353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.749 [2024-10-11 09:43:00.206443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.749 [2024-10-11 09:43:00.206508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.749 [2024-10-11 09:43:00.345955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:15.749 [2024-10-11 09:43:00.347855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:15.749 [2024-10-11 09:43:00.347927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:15.749 [2024-10-11 09:43:00.347991] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:15.749 [2024-10-11 09:43:00.348013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.749 [2024-10-11 09:43:00.348025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:15.749 request: 00:09:15.749 { 00:09:15.749 "name": "raid_bdev1", 00:09:15.749 "raid_level": "concat", 00:09:15.749 "base_bdevs": [ 00:09:15.749 "malloc1", 00:09:15.749 "malloc2" 00:09:15.749 ], 00:09:15.749 "strip_size_kb": 64, 00:09:15.749 "superblock": false, 00:09:15.749 "method": "bdev_raid_create", 00:09:15.749 "req_id": 1 00:09:15.749 } 00:09:15.749 Got JSON-RPC error response 00:09:15.749 response: 00:09:15.749 { 00:09:15.749 "code": -17, 00:09:15.749 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:15.749 } 00:09:15.749 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.750 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.750 
09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.008 [2024-10-11 09:43:00.401861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:16.008 [2024-10-11 09:43:00.401930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.008 [2024-10-11 09:43:00.401954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:16.008 [2024-10-11 09:43:00.401967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.008 [2024-10-11 09:43:00.404506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.008 [2024-10-11 09:43:00.404549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:16.008 [2024-10-11 09:43:00.404647] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:16.008 [2024-10-11 09:43:00.404720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.008 pt1 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.008 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.009 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.009 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.009 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.009 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.009 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.009 "name": "raid_bdev1", 00:09:16.009 "uuid": "32ac20d0-6f47-454b-a79e-f40896d080a1", 00:09:16.009 "strip_size_kb": 64, 00:09:16.009 "state": "configuring", 00:09:16.009 "raid_level": "concat", 00:09:16.009 "superblock": true, 00:09:16.009 "num_base_bdevs": 2, 00:09:16.009 "num_base_bdevs_discovered": 1, 00:09:16.009 "num_base_bdevs_operational": 2, 00:09:16.009 "base_bdevs_list": [ 00:09:16.009 { 00:09:16.009 "name": "pt1", 00:09:16.009 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:16.009 "is_configured": true, 00:09:16.009 "data_offset": 2048, 00:09:16.009 "data_size": 63488 00:09:16.009 }, 00:09:16.009 { 00:09:16.009 "name": null, 00:09:16.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.009 "is_configured": false, 00:09:16.009 "data_offset": 2048, 00:09:16.009 "data_size": 63488 00:09:16.009 } 00:09:16.009 ] 00:09:16.009 }' 00:09:16.009 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.009 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.268 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.269 [2024-10-11 09:43:00.881031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.269 [2024-10-11 09:43:00.881119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.269 [2024-10-11 09:43:00.881145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:16.269 [2024-10-11 09:43:00.881156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.269 [2024-10-11 09:43:00.881658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.269 [2024-10-11 09:43:00.881677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:16.269 [2024-10-11 09:43:00.881796] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:16.269 [2024-10-11 09:43:00.881825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.269 [2024-10-11 09:43:00.881958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:16.269 [2024-10-11 09:43:00.881969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:16.269 [2024-10-11 09:43:00.882221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.269 [2024-10-11 09:43:00.882403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:16.269 [2024-10-11 09:43:00.882423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:16.269 [2024-10-11 09:43:00.882585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.269 pt2 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.269 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.529 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.529 09:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.529 "name": "raid_bdev1", 00:09:16.529 "uuid": "32ac20d0-6f47-454b-a79e-f40896d080a1", 00:09:16.529 "strip_size_kb": 64, 00:09:16.529 "state": "online", 00:09:16.529 "raid_level": "concat", 00:09:16.529 "superblock": true, 00:09:16.529 "num_base_bdevs": 2, 00:09:16.529 "num_base_bdevs_discovered": 2, 00:09:16.529 "num_base_bdevs_operational": 2, 00:09:16.529 "base_bdevs_list": [ 00:09:16.529 { 00:09:16.529 "name": "pt1", 00:09:16.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.529 "is_configured": true, 00:09:16.529 "data_offset": 2048, 00:09:16.529 "data_size": 63488 00:09:16.529 }, 00:09:16.529 { 00:09:16.529 "name": "pt2", 00:09:16.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.529 "is_configured": true, 00:09:16.529 "data_offset": 2048, 00:09:16.529 "data_size": 63488 00:09:16.529 } 00:09:16.529 ] 00:09:16.529 }' 00:09:16.529 09:43:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.529 09:43:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.789 [2024-10-11 09:43:01.376489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.789 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.789 "name": "raid_bdev1", 00:09:16.789 "aliases": [ 00:09:16.789 "32ac20d0-6f47-454b-a79e-f40896d080a1" 00:09:16.789 ], 00:09:16.789 "product_name": "Raid Volume", 00:09:16.789 "block_size": 512, 00:09:16.789 "num_blocks": 126976, 00:09:16.789 "uuid": "32ac20d0-6f47-454b-a79e-f40896d080a1", 00:09:16.789 "assigned_rate_limits": { 00:09:16.789 "rw_ios_per_sec": 0, 00:09:16.789 "rw_mbytes_per_sec": 0, 00:09:16.789 
"r_mbytes_per_sec": 0, 00:09:16.789 "w_mbytes_per_sec": 0 00:09:16.789 }, 00:09:16.789 "claimed": false, 00:09:16.789 "zoned": false, 00:09:16.789 "supported_io_types": { 00:09:16.789 "read": true, 00:09:16.789 "write": true, 00:09:16.789 "unmap": true, 00:09:16.789 "flush": true, 00:09:16.789 "reset": true, 00:09:16.789 "nvme_admin": false, 00:09:16.789 "nvme_io": false, 00:09:16.789 "nvme_io_md": false, 00:09:16.789 "write_zeroes": true, 00:09:16.789 "zcopy": false, 00:09:16.789 "get_zone_info": false, 00:09:16.789 "zone_management": false, 00:09:16.789 "zone_append": false, 00:09:16.789 "compare": false, 00:09:16.789 "compare_and_write": false, 00:09:16.789 "abort": false, 00:09:16.789 "seek_hole": false, 00:09:16.789 "seek_data": false, 00:09:16.789 "copy": false, 00:09:16.789 "nvme_iov_md": false 00:09:16.789 }, 00:09:16.789 "memory_domains": [ 00:09:16.789 { 00:09:16.789 "dma_device_id": "system", 00:09:16.789 "dma_device_type": 1 00:09:16.789 }, 00:09:16.789 { 00:09:16.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.789 "dma_device_type": 2 00:09:16.789 }, 00:09:16.789 { 00:09:16.789 "dma_device_id": "system", 00:09:16.789 "dma_device_type": 1 00:09:16.789 }, 00:09:16.789 { 00:09:16.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.789 "dma_device_type": 2 00:09:16.789 } 00:09:16.789 ], 00:09:16.789 "driver_specific": { 00:09:16.789 "raid": { 00:09:16.789 "uuid": "32ac20d0-6f47-454b-a79e-f40896d080a1", 00:09:16.789 "strip_size_kb": 64, 00:09:16.789 "state": "online", 00:09:16.789 "raid_level": "concat", 00:09:16.789 "superblock": true, 00:09:16.789 "num_base_bdevs": 2, 00:09:16.789 "num_base_bdevs_discovered": 2, 00:09:16.790 "num_base_bdevs_operational": 2, 00:09:16.790 "base_bdevs_list": [ 00:09:16.790 { 00:09:16.790 "name": "pt1", 00:09:16.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.790 "is_configured": true, 00:09:16.790 "data_offset": 2048, 00:09:16.790 "data_size": 63488 00:09:16.790 }, 00:09:16.790 { 00:09:16.790 "name": 
"pt2", 00:09:16.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.790 "is_configured": true, 00:09:16.790 "data_offset": 2048, 00:09:16.790 "data_size": 63488 00:09:16.790 } 00:09:16.790 ] 00:09:16.790 } 00:09:16.790 } 00:09:16.790 }' 00:09:16.790 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.049 pt2' 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.049 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.050 [2024-10-11 09:43:01.608134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 32ac20d0-6f47-454b-a79e-f40896d080a1 '!=' 32ac20d0-6f47-454b-a79e-f40896d080a1 ']' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62631 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62631 ']' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 62631 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62631 00:09:17.050 killing process with pid 62631 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62631' 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62631 00:09:17.050 [2024-10-11 09:43:01.672221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.050 [2024-10-11 09:43:01.672326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.050 [2024-10-11 09:43:01.672382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.050 [2024-10-11 09:43:01.672394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:17.050 09:43:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62631 00:09:17.310 [2024-10-11 09:43:01.887701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.690 09:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:18.690 00:09:18.690 real 0m4.781s 00:09:18.690 user 0m6.839s 00:09:18.690 sys 0m0.784s 00:09:18.690 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.690 09:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:18.690 ************************************ 00:09:18.690 END TEST raid_superblock_test 00:09:18.690 ************************************ 00:09:18.690 09:43:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:18.691 09:43:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:18.691 09:43:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.691 09:43:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 ************************************ 00:09:18.691 START TEST raid_read_error_test 00:09:18.691 ************************************ 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9kJ35QFHH0 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62837 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62837 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62837 ']' 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.691 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.691 09:43:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 [2024-10-11 09:43:03.169279] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:18.691 [2024-10-11 09:43:03.169417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62837 ] 00:09:18.691 [2024-10-11 09:43:03.318396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.950 [2024-10-11 09:43:03.443444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.209 [2024-10-11 09:43:03.673942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.209 [2024-10-11 09:43:03.674011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.469 BaseBdev1_malloc 
00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.469 true 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.469 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.469 [2024-10-11 09:43:04.095196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:19.469 [2024-10-11 09:43:04.095251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.469 [2024-10-11 09:43:04.095271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:19.469 [2024-10-11 09:43:04.095282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.469 [2024-10-11 09:43:04.097551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.469 [2024-10-11 09:43:04.097591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:19.469 BaseBdev1 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.729 BaseBdev2_malloc 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.729 true 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:19.729 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.730 [2024-10-11 09:43:04.165171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:19.730 [2024-10-11 09:43:04.165224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.730 [2024-10-11 09:43:04.165240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:19.730 [2024-10-11 09:43:04.165251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.730 [2024-10-11 09:43:04.167469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.730 [2024-10-11 09:43:04.167509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:19.730 BaseBdev2 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.730 [2024-10-11 09:43:04.177207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.730 [2024-10-11 09:43:04.179185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.730 [2024-10-11 09:43:04.179384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.730 [2024-10-11 09:43:04.179405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:19.730 [2024-10-11 09:43:04.179680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:19.730 [2024-10-11 09:43:04.179889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.730 [2024-10-11 09:43:04.179906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:19.730 [2024-10-11 09:43:04.180062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.730 "name": "raid_bdev1", 00:09:19.730 "uuid": "319c581b-e122-4cd0-8ba3-0a4979541d96", 00:09:19.730 "strip_size_kb": 64, 00:09:19.730 "state": "online", 00:09:19.730 "raid_level": "concat", 00:09:19.730 "superblock": true, 00:09:19.730 "num_base_bdevs": 2, 00:09:19.730 "num_base_bdevs_discovered": 2, 00:09:19.730 "num_base_bdevs_operational": 2, 00:09:19.730 "base_bdevs_list": [ 00:09:19.730 { 00:09:19.730 "name": "BaseBdev1", 00:09:19.730 "uuid": "41a152e1-dd9a-5d71-9c5e-32507604e14f", 00:09:19.730 "is_configured": true, 00:09:19.730 "data_offset": 2048, 00:09:19.730 "data_size": 63488 00:09:19.730 }, 00:09:19.730 { 00:09:19.730 "name": "BaseBdev2", 00:09:19.730 
"uuid": "a5dc89cb-7c07-5b15-a0d1-85b7b67c2802", 00:09:19.730 "is_configured": true, 00:09:19.730 "data_offset": 2048, 00:09:19.730 "data_size": 63488 00:09:19.730 } 00:09:19.730 ] 00:09:19.730 }' 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.730 09:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.299 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:20.299 09:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:20.299 [2024-10-11 09:43:04.749694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.237 "name": "raid_bdev1", 00:09:21.237 "uuid": "319c581b-e122-4cd0-8ba3-0a4979541d96", 00:09:21.237 "strip_size_kb": 64, 00:09:21.237 "state": "online", 00:09:21.237 "raid_level": "concat", 00:09:21.237 "superblock": true, 00:09:21.237 "num_base_bdevs": 2, 00:09:21.237 "num_base_bdevs_discovered": 2, 00:09:21.237 "num_base_bdevs_operational": 2, 00:09:21.237 "base_bdevs_list": [ 00:09:21.237 { 00:09:21.237 "name": "BaseBdev1", 00:09:21.237 "uuid": "41a152e1-dd9a-5d71-9c5e-32507604e14f", 00:09:21.237 "is_configured": true, 00:09:21.237 "data_offset": 2048, 00:09:21.237 "data_size": 63488 00:09:21.237 }, 00:09:21.237 { 00:09:21.237 "name": "BaseBdev2", 00:09:21.237 "uuid": 
"a5dc89cb-7c07-5b15-a0d1-85b7b67c2802", 00:09:21.237 "is_configured": true, 00:09:21.237 "data_offset": 2048, 00:09:21.237 "data_size": 63488 00:09:21.237 } 00:09:21.237 ] 00:09:21.237 }' 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.237 09:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.496 [2024-10-11 09:43:06.081494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.496 [2024-10-11 09:43:06.081536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.496 [2024-10-11 09:43:06.084402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.496 [2024-10-11 09:43:06.084454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.496 [2024-10-11 09:43:06.084492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.496 [2024-10-11 09:43:06.084513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:21.496 { 00:09:21.496 "results": [ 00:09:21.496 { 00:09:21.496 "job": "raid_bdev1", 00:09:21.496 "core_mask": "0x1", 00:09:21.496 "workload": "randrw", 00:09:21.496 "percentage": 50, 00:09:21.496 "status": "finished", 00:09:21.496 "queue_depth": 1, 00:09:21.496 "io_size": 131072, 00:09:21.496 "runtime": 1.332618, 00:09:21.496 "iops": 15236.924610053293, 00:09:21.496 "mibps": 1904.6155762566616, 00:09:21.496 "io_failed": 1, 00:09:21.496 "io_timeout": 0, 00:09:21.496 "avg_latency_us": 
91.01077703279562, 00:09:21.496 "min_latency_us": 26.606113537117903, 00:09:21.496 "max_latency_us": 1452.380786026201 00:09:21.496 } 00:09:21.496 ], 00:09:21.496 "core_count": 1 00:09:21.496 } 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62837 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62837 ']' 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62837 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.496 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62837 00:09:21.755 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.755 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.755 killing process with pid 62837 00:09:21.755 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62837' 00:09:21.755 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62837 00:09:21.755 [2024-10-11 09:43:06.132028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.755 09:43:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62837 00:09:21.755 [2024-10-11 09:43:06.270783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9kJ35QFHH0 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:23.133 
09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:23.133 00:09:23.133 real 0m4.406s 00:09:23.133 user 0m5.303s 00:09:23.133 sys 0m0.538s 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.133 09:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.133 ************************************ 00:09:23.133 END TEST raid_read_error_test 00:09:23.133 ************************************ 00:09:23.133 09:43:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:23.133 09:43:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:23.133 09:43:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.133 09:43:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.133 ************************************ 00:09:23.133 START TEST raid_write_error_test 00:09:23.133 ************************************ 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.133 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.134 09:43:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wQfBGVgmxL 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62983 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62983 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62983 ']' 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.134 09:43:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.134 [2024-10-11 09:43:07.640197] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:23.134 [2024-10-11 09:43:07.640316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62983 ] 00:09:23.394 [2024-10-11 09:43:07.805239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.394 [2024-10-11 09:43:07.933833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.653 [2024-10-11 09:43:08.168336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.653 [2024-10-11 09:43:08.168404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.913 BaseBdev1_malloc 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:23.913 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 true 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 [2024-10-11 09:43:08.561583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.172 [2024-10-11 09:43:08.561638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.172 [2024-10-11 09:43:08.561658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.172 [2024-10-11 09:43:08.561669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.172 [2024-10-11 09:43:08.564095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.172 [2024-10-11 09:43:08.564137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.172 BaseBdev1 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 BaseBdev2_malloc 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.172 09:43:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 true 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.172 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 [2024-10-11 09:43:08.634911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:24.172 [2024-10-11 09:43:08.634991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.172 [2024-10-11 09:43:08.635011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:24.173 [2024-10-11 09:43:08.635024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.173 [2024-10-11 09:43:08.637441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.173 [2024-10-11 09:43:08.637489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:24.173 BaseBdev2 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.173 [2024-10-11 09:43:08.647026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:24.173 [2024-10-11 09:43:08.649269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.173 [2024-10-11 09:43:08.649521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.173 [2024-10-11 09:43:08.649539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:24.173 [2024-10-11 09:43:08.649866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:24.173 [2024-10-11 09:43:08.650071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.173 [2024-10-11 09:43:08.650091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:24.173 [2024-10-11 09:43:08.650298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.173 09:43:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.173 "name": "raid_bdev1", 00:09:24.173 "uuid": "35cc709d-e6ae-486e-9482-ff92dd3ca475", 00:09:24.173 "strip_size_kb": 64, 00:09:24.173 "state": "online", 00:09:24.173 "raid_level": "concat", 00:09:24.173 "superblock": true, 00:09:24.173 "num_base_bdevs": 2, 00:09:24.173 "num_base_bdevs_discovered": 2, 00:09:24.173 "num_base_bdevs_operational": 2, 00:09:24.173 "base_bdevs_list": [ 00:09:24.173 { 00:09:24.173 "name": "BaseBdev1", 00:09:24.173 "uuid": "0dcbee38-9184-55e5-9ef8-64d9ea6cbe86", 00:09:24.173 "is_configured": true, 00:09:24.173 "data_offset": 2048, 00:09:24.173 "data_size": 63488 00:09:24.173 }, 00:09:24.173 { 00:09:24.173 "name": "BaseBdev2", 00:09:24.173 "uuid": "622be12f-5164-5c29-ace3-7fed70993199", 00:09:24.173 "is_configured": true, 00:09:24.173 "data_offset": 2048, 00:09:24.173 "data_size": 63488 00:09:24.173 } 00:09:24.173 ] 00:09:24.173 }' 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.173 09:43:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.438 09:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:24.438 09:43:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.708 [2024-10-11 09:43:09.163595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.646 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.646 "name": "raid_bdev1", 00:09:25.646 "uuid": "35cc709d-e6ae-486e-9482-ff92dd3ca475", 00:09:25.646 "strip_size_kb": 64, 00:09:25.646 "state": "online", 00:09:25.646 "raid_level": "concat", 00:09:25.646 "superblock": true, 00:09:25.646 "num_base_bdevs": 2, 00:09:25.646 "num_base_bdevs_discovered": 2, 00:09:25.646 "num_base_bdevs_operational": 2, 00:09:25.646 "base_bdevs_list": [ 00:09:25.646 { 00:09:25.646 "name": "BaseBdev1", 00:09:25.646 "uuid": "0dcbee38-9184-55e5-9ef8-64d9ea6cbe86", 00:09:25.646 "is_configured": true, 00:09:25.646 "data_offset": 2048, 00:09:25.646 "data_size": 63488 00:09:25.646 }, 00:09:25.646 { 00:09:25.646 "name": "BaseBdev2", 00:09:25.646 "uuid": "622be12f-5164-5c29-ace3-7fed70993199", 00:09:25.646 "is_configured": true, 00:09:25.646 "data_offset": 2048, 00:09:25.646 "data_size": 63488 00:09:25.646 } 00:09:25.646 ] 00:09:25.646 }' 00:09:25.647 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.647 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.905 09:43:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.905 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.905 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.905 [2024-10-11 09:43:10.512161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.905 [2024-10-11 09:43:10.512207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.905 [2024-10-11 09:43:10.515331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.905 [2024-10-11 09:43:10.515389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.905 [2024-10-11 09:43:10.515425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.905 [2024-10-11 09:43:10.515440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:25.905 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.905 09:43:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62983 00:09:25.905 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62983 ']' 00:09:25.905 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62983 00:09:25.905 { 00:09:25.905 "results": [ 00:09:25.905 { 00:09:25.905 "job": "raid_bdev1", 00:09:25.905 "core_mask": "0x1", 00:09:25.905 "workload": "randrw", 00:09:25.905 "percentage": 50, 00:09:25.905 "status": "finished", 00:09:25.905 "queue_depth": 1, 00:09:25.905 "io_size": 131072, 00:09:25.905 "runtime": 1.349138, 00:09:25.905 "iops": 15035.526387960312, 00:09:25.905 "mibps": 1879.440798495039, 00:09:25.905 "io_failed": 1, 00:09:25.906 "io_timeout": 0, 00:09:25.906 "avg_latency_us": 92.14147352251452, 00:09:25.906 
"min_latency_us": 26.941484716157206, 00:09:25.906 "max_latency_us": 1709.9458515283843 00:09:25.906 } 00:09:25.906 ], 00:09:25.906 "core_count": 1 00:09:25.906 } 00:09:25.906 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:25.906 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.906 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62983 00:09:26.165 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.165 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.165 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62983' 00:09:26.165 killing process with pid 62983 00:09:26.165 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62983 00:09:26.165 [2024-10-11 09:43:10.558893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.165 09:43:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62983 00:09:26.165 [2024-10-11 09:43:10.689486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wQfBGVgmxL 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:27.545 ************************************ 00:09:27.545 END TEST raid_write_error_test 00:09:27.545 ************************************ 00:09:27.545 00:09:27.545 real 0m4.352s 00:09:27.545 user 0m5.200s 00:09:27.545 sys 0m0.561s 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.545 09:43:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.545 09:43:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:27.545 09:43:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:27.545 09:43:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:27.545 09:43:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.545 09:43:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.545 ************************************ 00:09:27.545 START TEST raid_state_function_test 00:09:27.546 ************************************ 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63126 00:09:27.546 Process raid pid: 63126 00:09:27.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63126' 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63126 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63126 ']' 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.546 09:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.546 [2024-10-11 09:43:12.046453] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:27.546 [2024-10-11 09:43:12.046698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.806 [2024-10-11 09:43:12.212497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.806 [2024-10-11 09:43:12.341199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.065 [2024-10-11 09:43:12.577543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.065 [2024-10-11 09:43:12.577679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.324 [2024-10-11 09:43:12.904943] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.324 [2024-10-11 09:43:12.905086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.324 [2024-10-11 09:43:12.905130] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.324 [2024-10-11 09:43:12.905162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.324 09:43:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.324 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.584 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.584 "name": "Existed_Raid", 00:09:28.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.584 "strip_size_kb": 0, 00:09:28.584 "state": "configuring", 00:09:28.584 
"raid_level": "raid1", 00:09:28.584 "superblock": false, 00:09:28.584 "num_base_bdevs": 2, 00:09:28.584 "num_base_bdevs_discovered": 0, 00:09:28.584 "num_base_bdevs_operational": 2, 00:09:28.584 "base_bdevs_list": [ 00:09:28.584 { 00:09:28.584 "name": "BaseBdev1", 00:09:28.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.584 "is_configured": false, 00:09:28.584 "data_offset": 0, 00:09:28.584 "data_size": 0 00:09:28.584 }, 00:09:28.584 { 00:09:28.584 "name": "BaseBdev2", 00:09:28.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.584 "is_configured": false, 00:09:28.584 "data_offset": 0, 00:09:28.584 "data_size": 0 00:09:28.584 } 00:09:28.584 ] 00:09:28.584 }' 00:09:28.584 09:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.584 09:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.844 [2024-10-11 09:43:13.376093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.844 [2024-10-11 09:43:13.376200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:28.844 [2024-10-11 09:43:13.384100] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.844 [2024-10-11 09:43:13.384194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.844 [2024-10-11 09:43:13.384228] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.844 [2024-10-11 09:43:13.384258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.844 [2024-10-11 09:43:13.433449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.844 BaseBdev1 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.844 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.844 [ 00:09:28.844 { 00:09:28.844 "name": "BaseBdev1", 00:09:28.844 "aliases": [ 00:09:28.844 "02e55c3a-45f3-4eaa-a1f5-5c750ddd99f8" 00:09:28.844 ], 00:09:28.844 "product_name": "Malloc disk", 00:09:28.844 "block_size": 512, 00:09:28.844 "num_blocks": 65536, 00:09:28.845 "uuid": "02e55c3a-45f3-4eaa-a1f5-5c750ddd99f8", 00:09:28.845 "assigned_rate_limits": { 00:09:28.845 "rw_ios_per_sec": 0, 00:09:28.845 "rw_mbytes_per_sec": 0, 00:09:28.845 "r_mbytes_per_sec": 0, 00:09:28.845 "w_mbytes_per_sec": 0 00:09:28.845 }, 00:09:28.845 "claimed": true, 00:09:28.845 "claim_type": "exclusive_write", 00:09:28.845 "zoned": false, 00:09:28.845 "supported_io_types": { 00:09:28.845 "read": true, 00:09:28.845 "write": true, 00:09:28.845 "unmap": true, 00:09:28.845 "flush": true, 00:09:28.845 "reset": true, 00:09:28.845 "nvme_admin": false, 00:09:28.845 "nvme_io": false, 00:09:28.845 "nvme_io_md": false, 00:09:28.845 "write_zeroes": true, 00:09:28.845 "zcopy": true, 00:09:28.845 "get_zone_info": false, 00:09:28.845 "zone_management": false, 00:09:28.845 "zone_append": false, 00:09:28.845 "compare": false, 00:09:28.845 "compare_and_write": false, 00:09:28.845 "abort": true, 00:09:28.845 "seek_hole": false, 00:09:28.845 "seek_data": false, 00:09:28.845 "copy": true, 00:09:28.845 "nvme_iov_md": 
false 00:09:28.845 }, 00:09:28.845 "memory_domains": [ 00:09:28.845 { 00:09:28.845 "dma_device_id": "system", 00:09:28.845 "dma_device_type": 1 00:09:28.845 }, 00:09:28.845 { 00:09:28.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.845 "dma_device_type": 2 00:09:28.845 } 00:09:28.845 ], 00:09:28.845 "driver_specific": {} 00:09:28.845 } 00:09:28.845 ] 00:09:28.845 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.105 
09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.105 "name": "Existed_Raid", 00:09:29.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.105 "strip_size_kb": 0, 00:09:29.105 "state": "configuring", 00:09:29.105 "raid_level": "raid1", 00:09:29.105 "superblock": false, 00:09:29.105 "num_base_bdevs": 2, 00:09:29.105 "num_base_bdevs_discovered": 1, 00:09:29.105 "num_base_bdevs_operational": 2, 00:09:29.105 "base_bdevs_list": [ 00:09:29.105 { 00:09:29.105 "name": "BaseBdev1", 00:09:29.105 "uuid": "02e55c3a-45f3-4eaa-a1f5-5c750ddd99f8", 00:09:29.105 "is_configured": true, 00:09:29.105 "data_offset": 0, 00:09:29.105 "data_size": 65536 00:09:29.105 }, 00:09:29.105 { 00:09:29.105 "name": "BaseBdev2", 00:09:29.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.105 "is_configured": false, 00:09:29.105 "data_offset": 0, 00:09:29.105 "data_size": 0 00:09:29.105 } 00:09:29.105 ] 00:09:29.105 }' 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.105 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.365 [2024-10-11 09:43:13.944674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.365 [2024-10-11 09:43:13.944742] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.365 [2024-10-11 09:43:13.956717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.365 [2024-10-11 09:43:13.958713] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.365 [2024-10-11 09:43:13.958827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.365 09:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.625 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.625 "name": "Existed_Raid", 00:09:29.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.625 "strip_size_kb": 0, 00:09:29.625 "state": "configuring", 00:09:29.625 "raid_level": "raid1", 00:09:29.625 "superblock": false, 00:09:29.625 "num_base_bdevs": 2, 00:09:29.625 "num_base_bdevs_discovered": 1, 00:09:29.625 "num_base_bdevs_operational": 2, 00:09:29.625 "base_bdevs_list": [ 00:09:29.625 { 00:09:29.625 "name": "BaseBdev1", 00:09:29.625 "uuid": "02e55c3a-45f3-4eaa-a1f5-5c750ddd99f8", 00:09:29.625 "is_configured": true, 00:09:29.625 "data_offset": 0, 00:09:29.625 "data_size": 65536 00:09:29.625 }, 00:09:29.625 { 00:09:29.625 "name": "BaseBdev2", 00:09:29.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.625 "is_configured": false, 00:09:29.625 "data_offset": 0, 00:09:29.625 "data_size": 0 00:09:29.625 } 00:09:29.625 ] 
00:09:29.625 }' 00:09:29.625 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.625 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.885 [2024-10-11 09:43:14.487788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.885 [2024-10-11 09:43:14.487853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.885 [2024-10-11 09:43:14.487862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:29.885 [2024-10-11 09:43:14.488134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:29.885 [2024-10-11 09:43:14.488336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.885 [2024-10-11 09:43:14.488352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:29.885 [2024-10-11 09:43:14.488652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.885 BaseBdev2 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.885 09:43:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.886 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.886 [ 00:09:29.886 { 00:09:29.886 "name": "BaseBdev2", 00:09:29.886 "aliases": [ 00:09:29.886 "eb1ece0c-5a05-4ee3-a3aa-e703a308200f" 00:09:29.886 ], 00:09:29.886 "product_name": "Malloc disk", 00:09:29.886 "block_size": 512, 00:09:29.886 "num_blocks": 65536, 00:09:29.886 "uuid": "eb1ece0c-5a05-4ee3-a3aa-e703a308200f", 00:09:29.886 "assigned_rate_limits": { 00:09:29.886 "rw_ios_per_sec": 0, 00:09:29.886 "rw_mbytes_per_sec": 0, 00:09:29.886 "r_mbytes_per_sec": 0, 00:09:29.886 "w_mbytes_per_sec": 0 00:09:30.146 }, 00:09:30.146 "claimed": true, 00:09:30.146 "claim_type": "exclusive_write", 00:09:30.146 "zoned": false, 00:09:30.146 "supported_io_types": { 00:09:30.146 "read": true, 00:09:30.146 "write": true, 00:09:30.146 "unmap": true, 00:09:30.146 "flush": true, 00:09:30.146 "reset": true, 00:09:30.146 "nvme_admin": false, 00:09:30.146 "nvme_io": false, 00:09:30.146 "nvme_io_md": false, 00:09:30.146 "write_zeroes": 
true, 00:09:30.146 "zcopy": true, 00:09:30.146 "get_zone_info": false, 00:09:30.146 "zone_management": false, 00:09:30.146 "zone_append": false, 00:09:30.146 "compare": false, 00:09:30.146 "compare_and_write": false, 00:09:30.146 "abort": true, 00:09:30.146 "seek_hole": false, 00:09:30.146 "seek_data": false, 00:09:30.146 "copy": true, 00:09:30.146 "nvme_iov_md": false 00:09:30.146 }, 00:09:30.146 "memory_domains": [ 00:09:30.146 { 00:09:30.146 "dma_device_id": "system", 00:09:30.146 "dma_device_type": 1 00:09:30.146 }, 00:09:30.146 { 00:09:30.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.146 "dma_device_type": 2 00:09:30.146 } 00:09:30.146 ], 00:09:30.146 "driver_specific": {} 00:09:30.146 } 00:09:30.146 ] 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.146 09:43:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.146 "name": "Existed_Raid", 00:09:30.146 "uuid": "84b100ef-bed2-4049-9c3d-dc65f1658794", 00:09:30.146 "strip_size_kb": 0, 00:09:30.146 "state": "online", 00:09:30.146 "raid_level": "raid1", 00:09:30.146 "superblock": false, 00:09:30.146 "num_base_bdevs": 2, 00:09:30.146 "num_base_bdevs_discovered": 2, 00:09:30.146 "num_base_bdevs_operational": 2, 00:09:30.146 "base_bdevs_list": [ 00:09:30.146 { 00:09:30.146 "name": "BaseBdev1", 00:09:30.146 "uuid": "02e55c3a-45f3-4eaa-a1f5-5c750ddd99f8", 00:09:30.146 "is_configured": true, 00:09:30.146 "data_offset": 0, 00:09:30.146 "data_size": 65536 00:09:30.146 }, 00:09:30.146 { 00:09:30.146 "name": "BaseBdev2", 00:09:30.146 "uuid": "eb1ece0c-5a05-4ee3-a3aa-e703a308200f", 00:09:30.146 "is_configured": true, 00:09:30.146 "data_offset": 0, 00:09:30.146 "data_size": 65536 00:09:30.146 } 00:09:30.146 ] 00:09:30.146 }' 00:09:30.146 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.146 09:43:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.406 [2024-10-11 09:43:14.971312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.406 09:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.406 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.406 "name": "Existed_Raid", 00:09:30.406 "aliases": [ 00:09:30.406 "84b100ef-bed2-4049-9c3d-dc65f1658794" 00:09:30.406 ], 00:09:30.406 "product_name": "Raid Volume", 00:09:30.406 "block_size": 512, 00:09:30.406 "num_blocks": 65536, 00:09:30.406 "uuid": "84b100ef-bed2-4049-9c3d-dc65f1658794", 00:09:30.406 "assigned_rate_limits": { 00:09:30.406 "rw_ios_per_sec": 0, 00:09:30.406 "rw_mbytes_per_sec": 0, 00:09:30.406 "r_mbytes_per_sec": 0, 00:09:30.406 
"w_mbytes_per_sec": 0 00:09:30.406 }, 00:09:30.406 "claimed": false, 00:09:30.406 "zoned": false, 00:09:30.406 "supported_io_types": { 00:09:30.406 "read": true, 00:09:30.406 "write": true, 00:09:30.406 "unmap": false, 00:09:30.406 "flush": false, 00:09:30.406 "reset": true, 00:09:30.406 "nvme_admin": false, 00:09:30.406 "nvme_io": false, 00:09:30.406 "nvme_io_md": false, 00:09:30.406 "write_zeroes": true, 00:09:30.406 "zcopy": false, 00:09:30.406 "get_zone_info": false, 00:09:30.406 "zone_management": false, 00:09:30.406 "zone_append": false, 00:09:30.406 "compare": false, 00:09:30.406 "compare_and_write": false, 00:09:30.406 "abort": false, 00:09:30.406 "seek_hole": false, 00:09:30.406 "seek_data": false, 00:09:30.406 "copy": false, 00:09:30.406 "nvme_iov_md": false 00:09:30.406 }, 00:09:30.406 "memory_domains": [ 00:09:30.406 { 00:09:30.406 "dma_device_id": "system", 00:09:30.406 "dma_device_type": 1 00:09:30.406 }, 00:09:30.406 { 00:09:30.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.406 "dma_device_type": 2 00:09:30.406 }, 00:09:30.406 { 00:09:30.406 "dma_device_id": "system", 00:09:30.406 "dma_device_type": 1 00:09:30.406 }, 00:09:30.406 { 00:09:30.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.406 "dma_device_type": 2 00:09:30.406 } 00:09:30.406 ], 00:09:30.406 "driver_specific": { 00:09:30.406 "raid": { 00:09:30.406 "uuid": "84b100ef-bed2-4049-9c3d-dc65f1658794", 00:09:30.406 "strip_size_kb": 0, 00:09:30.406 "state": "online", 00:09:30.406 "raid_level": "raid1", 00:09:30.406 "superblock": false, 00:09:30.406 "num_base_bdevs": 2, 00:09:30.406 "num_base_bdevs_discovered": 2, 00:09:30.406 "num_base_bdevs_operational": 2, 00:09:30.406 "base_bdevs_list": [ 00:09:30.406 { 00:09:30.406 "name": "BaseBdev1", 00:09:30.406 "uuid": "02e55c3a-45f3-4eaa-a1f5-5c750ddd99f8", 00:09:30.406 "is_configured": true, 00:09:30.406 "data_offset": 0, 00:09:30.406 "data_size": 65536 00:09:30.406 }, 00:09:30.406 { 00:09:30.406 "name": "BaseBdev2", 00:09:30.406 "uuid": 
"eb1ece0c-5a05-4ee3-a3aa-e703a308200f", 00:09:30.406 "is_configured": true, 00:09:30.406 "data_offset": 0, 00:09:30.406 "data_size": 65536 00:09:30.406 } 00:09:30.406 ] 00:09:30.406 } 00:09:30.406 } 00:09:30.406 }' 00:09:30.406 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:30.666 BaseBdev2' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.666 09:43:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.666 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.667 [2024-10-11 09:43:15.222669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.926 "name": "Existed_Raid", 00:09:30.926 "uuid": "84b100ef-bed2-4049-9c3d-dc65f1658794", 00:09:30.926 "strip_size_kb": 0, 00:09:30.926 "state": "online", 00:09:30.926 "raid_level": "raid1", 00:09:30.926 "superblock": false, 00:09:30.926 "num_base_bdevs": 2, 00:09:30.926 "num_base_bdevs_discovered": 1, 00:09:30.926 "num_base_bdevs_operational": 1, 00:09:30.926 "base_bdevs_list": [ 00:09:30.926 { 
00:09:30.926 "name": null, 00:09:30.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.926 "is_configured": false, 00:09:30.926 "data_offset": 0, 00:09:30.926 "data_size": 65536 00:09:30.926 }, 00:09:30.926 { 00:09:30.926 "name": "BaseBdev2", 00:09:30.926 "uuid": "eb1ece0c-5a05-4ee3-a3aa-e703a308200f", 00:09:30.926 "is_configured": true, 00:09:30.926 "data_offset": 0, 00:09:30.926 "data_size": 65536 00:09:30.926 } 00:09:30.926 ] 00:09:30.926 }' 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.926 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.192 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:31.192 [2024-10-11 09:43:15.800653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.192 [2024-10-11 09:43:15.800775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.452 [2024-10-11 09:43:15.901510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.452 [2024-10-11 09:43:15.901651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.453 [2024-10-11 09:43:15.901703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63126 00:09:31.453 09:43:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63126 ']' 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63126 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63126 00:09:31.453 killing process with pid 63126 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63126' 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63126 00:09:31.453 [2024-10-11 09:43:15.999069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.453 09:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63126 00:09:31.453 [2024-10-11 09:43:16.018819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:32.836 00:09:32.836 real 0m5.226s 00:09:32.836 user 0m7.599s 00:09:32.836 sys 0m0.826s 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.836 ************************************ 00:09:32.836 END TEST raid_state_function_test 00:09:32.836 ************************************ 00:09:32.836 09:43:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:32.836 09:43:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:32.836 09:43:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.836 09:43:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.836 ************************************ 00:09:32.836 START TEST raid_state_function_test_sb 00:09:32.836 ************************************ 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63375 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63375' 00:09:32.836 Process raid pid: 63375 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63375 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 63375 ']' 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.836 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.836 09:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.836 [2024-10-11 09:43:17.338346] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:32.836 [2024-10-11 09:43:17.338465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.096 [2024-10-11 09:43:17.503003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.096 [2024-10-11 09:43:17.634217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.356 [2024-10-11 09:43:17.872355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.356 [2024-10-11 09:43:17.872413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.616 [2024-10-11 09:43:18.203490] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.616 [2024-10-11 09:43:18.203549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.616 [2024-10-11 09:43:18.203559] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.616 [2024-10-11 09:43:18.203586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.616 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.875 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.875 "name": "Existed_Raid", 00:09:33.875 "uuid": "10ec6b16-6626-41e2-bcfb-cebb55ee4bf7", 00:09:33.875 "strip_size_kb": 0, 00:09:33.875 "state": "configuring", 00:09:33.875 "raid_level": "raid1", 00:09:33.875 "superblock": true, 00:09:33.875 "num_base_bdevs": 2, 00:09:33.875 "num_base_bdevs_discovered": 0, 00:09:33.875 "num_base_bdevs_operational": 2, 00:09:33.875 "base_bdevs_list": [ 00:09:33.875 { 00:09:33.875 "name": "BaseBdev1", 00:09:33.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.875 "is_configured": false, 00:09:33.875 "data_offset": 0, 00:09:33.875 "data_size": 0 00:09:33.875 }, 00:09:33.875 { 00:09:33.875 "name": "BaseBdev2", 00:09:33.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.875 "is_configured": false, 00:09:33.875 "data_offset": 0, 00:09:33.875 "data_size": 0 00:09:33.875 } 00:09:33.875 ] 00:09:33.875 }' 00:09:33.875 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.875 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.135 [2024-10-11 09:43:18.658684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:34.135 [2024-10-11 09:43:18.658851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.135 [2024-10-11 09:43:18.670698] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.135 [2024-10-11 09:43:18.670759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.135 [2024-10-11 09:43:18.670770] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.135 [2024-10-11 09:43:18.670783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.135 [2024-10-11 09:43:18.726430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.135 BaseBdev1 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.135 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.136 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.136 [ 00:09:34.136 { 00:09:34.136 "name": "BaseBdev1", 00:09:34.136 "aliases": [ 00:09:34.136 "b9ba19b5-28f1-409f-ad5d-9cd7ffd1ceb4" 00:09:34.136 ], 00:09:34.136 "product_name": "Malloc disk", 00:09:34.136 "block_size": 512, 00:09:34.136 "num_blocks": 65536, 00:09:34.136 "uuid": "b9ba19b5-28f1-409f-ad5d-9cd7ffd1ceb4", 00:09:34.136 "assigned_rate_limits": { 00:09:34.136 "rw_ios_per_sec": 0, 00:09:34.136 "rw_mbytes_per_sec": 0, 00:09:34.136 "r_mbytes_per_sec": 0, 00:09:34.136 "w_mbytes_per_sec": 0 00:09:34.136 }, 00:09:34.136 "claimed": true, 
00:09:34.136 "claim_type": "exclusive_write", 00:09:34.136 "zoned": false, 00:09:34.136 "supported_io_types": { 00:09:34.136 "read": true, 00:09:34.136 "write": true, 00:09:34.136 "unmap": true, 00:09:34.136 "flush": true, 00:09:34.136 "reset": true, 00:09:34.136 "nvme_admin": false, 00:09:34.136 "nvme_io": false, 00:09:34.136 "nvme_io_md": false, 00:09:34.136 "write_zeroes": true, 00:09:34.136 "zcopy": true, 00:09:34.136 "get_zone_info": false, 00:09:34.136 "zone_management": false, 00:09:34.136 "zone_append": false, 00:09:34.136 "compare": false, 00:09:34.136 "compare_and_write": false, 00:09:34.136 "abort": true, 00:09:34.136 "seek_hole": false, 00:09:34.136 "seek_data": false, 00:09:34.136 "copy": true, 00:09:34.136 "nvme_iov_md": false 00:09:34.136 }, 00:09:34.136 "memory_domains": [ 00:09:34.136 { 00:09:34.136 "dma_device_id": "system", 00:09:34.395 "dma_device_type": 1 00:09:34.395 }, 00:09:34.395 { 00:09:34.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.395 "dma_device_type": 2 00:09:34.395 } 00:09:34.395 ], 00:09:34.395 "driver_specific": {} 00:09:34.395 } 00:09:34.395 ] 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.395 "name": "Existed_Raid", 00:09:34.395 "uuid": "12044a18-0be7-459b-a4e2-96054a6559db", 00:09:34.395 "strip_size_kb": 0, 00:09:34.395 "state": "configuring", 00:09:34.395 "raid_level": "raid1", 00:09:34.395 "superblock": true, 00:09:34.395 "num_base_bdevs": 2, 00:09:34.395 "num_base_bdevs_discovered": 1, 00:09:34.395 "num_base_bdevs_operational": 2, 00:09:34.395 "base_bdevs_list": [ 00:09:34.395 { 00:09:34.395 "name": "BaseBdev1", 00:09:34.395 "uuid": "b9ba19b5-28f1-409f-ad5d-9cd7ffd1ceb4", 00:09:34.395 "is_configured": true, 00:09:34.395 "data_offset": 2048, 00:09:34.395 "data_size": 63488 00:09:34.395 }, 00:09:34.395 { 00:09:34.395 "name": "BaseBdev2", 00:09:34.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.395 "is_configured": false, 00:09:34.395 
"data_offset": 0, 00:09:34.395 "data_size": 0 00:09:34.395 } 00:09:34.395 ] 00:09:34.395 }' 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.395 09:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.655 [2024-10-11 09:43:19.205718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.655 [2024-10-11 09:43:19.205804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.655 [2024-10-11 09:43:19.217784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.655 [2024-10-11 09:43:19.219952] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.655 [2024-10-11 09:43:19.220040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.655 09:43:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.655 "name": "Existed_Raid", 00:09:34.655 "uuid": "2eef894a-bc2d-4198-9c01-ac4db2184f0e", 00:09:34.655 "strip_size_kb": 0, 00:09:34.655 "state": "configuring", 00:09:34.655 "raid_level": "raid1", 00:09:34.655 "superblock": true, 00:09:34.655 "num_base_bdevs": 2, 00:09:34.655 "num_base_bdevs_discovered": 1, 00:09:34.655 "num_base_bdevs_operational": 2, 00:09:34.655 "base_bdevs_list": [ 00:09:34.655 { 00:09:34.655 "name": "BaseBdev1", 00:09:34.655 "uuid": "b9ba19b5-28f1-409f-ad5d-9cd7ffd1ceb4", 00:09:34.655 "is_configured": true, 00:09:34.655 "data_offset": 2048, 00:09:34.655 "data_size": 63488 00:09:34.655 }, 00:09:34.655 { 00:09:34.655 "name": "BaseBdev2", 00:09:34.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.655 "is_configured": false, 00:09:34.656 "data_offset": 0, 00:09:34.656 "data_size": 0 00:09:34.656 } 00:09:34.656 ] 00:09:34.656 }' 00:09:34.656 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.656 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.224 [2024-10-11 09:43:19.721304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.224 [2024-10-11 09:43:19.721710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.224 [2024-10-11 09:43:19.721813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.224 [2024-10-11 09:43:19.722159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:35.224 
BaseBdev2 00:09:35.224 [2024-10-11 09:43:19.722431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.224 [2024-10-11 09:43:19.722486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.224 [2024-10-11 09:43:19.722774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.224 [ 00:09:35.224 { 00:09:35.224 "name": "BaseBdev2", 00:09:35.224 "aliases": [ 00:09:35.224 "2d94e41a-7b78-4b68-92e5-0d446477a405" 00:09:35.224 ], 00:09:35.224 "product_name": "Malloc disk", 00:09:35.224 "block_size": 512, 00:09:35.224 "num_blocks": 65536, 00:09:35.224 "uuid": "2d94e41a-7b78-4b68-92e5-0d446477a405", 00:09:35.224 "assigned_rate_limits": { 00:09:35.224 "rw_ios_per_sec": 0, 00:09:35.224 "rw_mbytes_per_sec": 0, 00:09:35.224 "r_mbytes_per_sec": 0, 00:09:35.224 "w_mbytes_per_sec": 0 00:09:35.224 }, 00:09:35.224 "claimed": true, 00:09:35.224 "claim_type": "exclusive_write", 00:09:35.224 "zoned": false, 00:09:35.224 "supported_io_types": { 00:09:35.224 "read": true, 00:09:35.224 "write": true, 00:09:35.224 "unmap": true, 00:09:35.224 "flush": true, 00:09:35.224 "reset": true, 00:09:35.224 "nvme_admin": false, 00:09:35.224 "nvme_io": false, 00:09:35.224 "nvme_io_md": false, 00:09:35.224 "write_zeroes": true, 00:09:35.224 "zcopy": true, 00:09:35.224 "get_zone_info": false, 00:09:35.224 "zone_management": false, 00:09:35.224 "zone_append": false, 00:09:35.224 "compare": false, 00:09:35.224 "compare_and_write": false, 00:09:35.224 "abort": true, 00:09:35.224 "seek_hole": false, 00:09:35.224 "seek_data": false, 00:09:35.224 "copy": true, 00:09:35.224 "nvme_iov_md": false 00:09:35.224 }, 00:09:35.224 "memory_domains": [ 00:09:35.224 { 00:09:35.224 "dma_device_id": "system", 00:09:35.224 "dma_device_type": 1 00:09:35.224 }, 00:09:35.224 { 00:09:35.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.224 "dma_device_type": 2 00:09:35.224 } 00:09:35.224 ], 00:09:35.224 "driver_specific": {} 00:09:35.224 } 00:09:35.224 ] 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:35.224 "name": "Existed_Raid", 00:09:35.224 "uuid": "2eef894a-bc2d-4198-9c01-ac4db2184f0e", 00:09:35.224 "strip_size_kb": 0, 00:09:35.224 "state": "online", 00:09:35.224 "raid_level": "raid1", 00:09:35.224 "superblock": true, 00:09:35.224 "num_base_bdevs": 2, 00:09:35.224 "num_base_bdevs_discovered": 2, 00:09:35.224 "num_base_bdevs_operational": 2, 00:09:35.224 "base_bdevs_list": [ 00:09:35.224 { 00:09:35.224 "name": "BaseBdev1", 00:09:35.224 "uuid": "b9ba19b5-28f1-409f-ad5d-9cd7ffd1ceb4", 00:09:35.224 "is_configured": true, 00:09:35.224 "data_offset": 2048, 00:09:35.224 "data_size": 63488 00:09:35.224 }, 00:09:35.224 { 00:09:35.224 "name": "BaseBdev2", 00:09:35.224 "uuid": "2d94e41a-7b78-4b68-92e5-0d446477a405", 00:09:35.224 "is_configured": true, 00:09:35.224 "data_offset": 2048, 00:09:35.224 "data_size": 63488 00:09:35.224 } 00:09:35.224 ] 00:09:35.224 }' 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.224 09:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.793 09:43:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.793 [2024-10-11 09:43:20.260831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.793 "name": "Existed_Raid", 00:09:35.793 "aliases": [ 00:09:35.793 "2eef894a-bc2d-4198-9c01-ac4db2184f0e" 00:09:35.793 ], 00:09:35.793 "product_name": "Raid Volume", 00:09:35.793 "block_size": 512, 00:09:35.793 "num_blocks": 63488, 00:09:35.793 "uuid": "2eef894a-bc2d-4198-9c01-ac4db2184f0e", 00:09:35.793 "assigned_rate_limits": { 00:09:35.793 "rw_ios_per_sec": 0, 00:09:35.793 "rw_mbytes_per_sec": 0, 00:09:35.793 "r_mbytes_per_sec": 0, 00:09:35.793 "w_mbytes_per_sec": 0 00:09:35.793 }, 00:09:35.793 "claimed": false, 00:09:35.793 "zoned": false, 00:09:35.793 "supported_io_types": { 00:09:35.793 "read": true, 00:09:35.793 "write": true, 00:09:35.793 "unmap": false, 00:09:35.793 "flush": false, 00:09:35.793 "reset": true, 00:09:35.793 "nvme_admin": false, 00:09:35.793 "nvme_io": false, 00:09:35.793 "nvme_io_md": false, 00:09:35.793 "write_zeroes": true, 00:09:35.793 "zcopy": false, 00:09:35.793 "get_zone_info": false, 00:09:35.793 "zone_management": false, 00:09:35.793 "zone_append": false, 00:09:35.793 "compare": false, 00:09:35.793 "compare_and_write": false, 00:09:35.793 "abort": false, 00:09:35.793 "seek_hole": false, 00:09:35.793 "seek_data": false, 00:09:35.793 "copy": false, 00:09:35.793 "nvme_iov_md": false 00:09:35.793 }, 00:09:35.793 "memory_domains": [ 00:09:35.793 { 00:09:35.793 "dma_device_id": "system", 00:09:35.793 
"dma_device_type": 1 00:09:35.793 }, 00:09:35.793 { 00:09:35.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.793 "dma_device_type": 2 00:09:35.793 }, 00:09:35.793 { 00:09:35.793 "dma_device_id": "system", 00:09:35.793 "dma_device_type": 1 00:09:35.793 }, 00:09:35.793 { 00:09:35.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.793 "dma_device_type": 2 00:09:35.793 } 00:09:35.793 ], 00:09:35.793 "driver_specific": { 00:09:35.793 "raid": { 00:09:35.793 "uuid": "2eef894a-bc2d-4198-9c01-ac4db2184f0e", 00:09:35.793 "strip_size_kb": 0, 00:09:35.793 "state": "online", 00:09:35.793 "raid_level": "raid1", 00:09:35.793 "superblock": true, 00:09:35.793 "num_base_bdevs": 2, 00:09:35.793 "num_base_bdevs_discovered": 2, 00:09:35.793 "num_base_bdevs_operational": 2, 00:09:35.793 "base_bdevs_list": [ 00:09:35.793 { 00:09:35.793 "name": "BaseBdev1", 00:09:35.793 "uuid": "b9ba19b5-28f1-409f-ad5d-9cd7ffd1ceb4", 00:09:35.793 "is_configured": true, 00:09:35.793 "data_offset": 2048, 00:09:35.793 "data_size": 63488 00:09:35.793 }, 00:09:35.793 { 00:09:35.793 "name": "BaseBdev2", 00:09:35.793 "uuid": "2d94e41a-7b78-4b68-92e5-0d446477a405", 00:09:35.793 "is_configured": true, 00:09:35.793 "data_offset": 2048, 00:09:35.793 "data_size": 63488 00:09:35.793 } 00:09:35.793 ] 00:09:35.793 } 00:09:35.793 } 00:09:35.793 }' 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.793 BaseBdev2' 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.793 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.052 09:43:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.052 [2024-10-11 09:43:20.508131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.052 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.053 "name": "Existed_Raid", 00:09:36.053 "uuid": "2eef894a-bc2d-4198-9c01-ac4db2184f0e", 00:09:36.053 "strip_size_kb": 0, 00:09:36.053 "state": "online", 00:09:36.053 "raid_level": "raid1", 00:09:36.053 "superblock": true, 00:09:36.053 "num_base_bdevs": 2, 00:09:36.053 "num_base_bdevs_discovered": 1, 00:09:36.053 "num_base_bdevs_operational": 1, 00:09:36.053 "base_bdevs_list": [ 00:09:36.053 { 00:09:36.053 "name": null, 00:09:36.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.053 "is_configured": false, 00:09:36.053 "data_offset": 0, 00:09:36.053 "data_size": 63488 00:09:36.053 }, 00:09:36.053 { 00:09:36.053 "name": "BaseBdev2", 00:09:36.053 "uuid": "2d94e41a-7b78-4b68-92e5-0d446477a405", 00:09:36.053 "is_configured": true, 00:09:36.053 "data_offset": 2048, 00:09:36.053 "data_size": 63488 00:09:36.053 } 00:09:36.053 ] 00:09:36.053 }' 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.053 09:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.621 [2024-10-11 09:43:21.075021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.621 [2024-10-11 09:43:21.075241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.621 [2024-10-11 09:43:21.169920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.621 [2024-10-11 09:43:21.170106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.621 [2024-10-11 09:43:21.170150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63375 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63375 ']' 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63375 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.621 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63375 00:09:36.881 killing process with pid 63375 00:09:36.881 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:09:36.881 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.881 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63375' 00:09:36.881 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63375 00:09:36.881 [2024-10-11 09:43:21.262907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.881 09:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63375 00:09:36.881 [2024-10-11 09:43:21.281426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.820 09:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.820 00:09:37.820 real 0m5.176s 00:09:37.820 user 0m7.489s 00:09:37.820 sys 0m0.841s 00:09:37.820 ************************************ 00:09:37.820 END TEST raid_state_function_test_sb 00:09:37.820 ************************************ 00:09:37.820 09:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.820 09:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.080 09:43:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:38.080 09:43:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:38.080 09:43:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.080 09:43:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.080 ************************************ 00:09:38.080 START TEST raid_superblock_test 00:09:38.080 ************************************ 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63626 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63626 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63626 ']' 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.080 09:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.080 [2024-10-11 09:43:22.574548] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:38.080 [2024-10-11 09:43:22.574817] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63626 ] 00:09:38.340 [2024-10-11 09:43:22.739766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.340 [2024-10-11 09:43:22.864021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.599 [2024-10-11 09:43:23.086843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.599 [2024-10-11 09:43:23.086905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.859 09:43:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.859 malloc1 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.859 [2024-10-11 09:43:23.482043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.859 [2024-10-11 09:43:23.482172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.859 [2024-10-11 09:43:23.482226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:38.859 [2024-10-11 09:43:23.482259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.859 
[2024-10-11 09:43:23.484433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.859 [2024-10-11 09:43:23.484510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.859 pt1 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.859 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.119 malloc2 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.119 09:43:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.119 [2024-10-11 09:43:23.540542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.119 [2024-10-11 09:43:23.540604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.119 [2024-10-11 09:43:23.540625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:39.119 [2024-10-11 09:43:23.540634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.119 [2024-10-11 09:43:23.542715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.119 [2024-10-11 09:43:23.542801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.119 pt2 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.119 [2024-10-11 09:43:23.552570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.119 [2024-10-11 09:43:23.554396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.119 [2024-10-11 09:43:23.554617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:39.119 [2024-10-11 09:43:23.554637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.119 [2024-10-11 
09:43:23.554893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:39.119 [2024-10-11 09:43:23.555061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:39.119 [2024-10-11 09:43:23.555075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:39.119 [2024-10-11 09:43:23.555237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.119 09:43:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.119 "name": "raid_bdev1", 00:09:39.119 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:39.119 "strip_size_kb": 0, 00:09:39.119 "state": "online", 00:09:39.119 "raid_level": "raid1", 00:09:39.119 "superblock": true, 00:09:39.119 "num_base_bdevs": 2, 00:09:39.119 "num_base_bdevs_discovered": 2, 00:09:39.119 "num_base_bdevs_operational": 2, 00:09:39.119 "base_bdevs_list": [ 00:09:39.119 { 00:09:39.119 "name": "pt1", 00:09:39.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.119 "is_configured": true, 00:09:39.119 "data_offset": 2048, 00:09:39.119 "data_size": 63488 00:09:39.119 }, 00:09:39.119 { 00:09:39.119 "name": "pt2", 00:09:39.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.119 "is_configured": true, 00:09:39.119 "data_offset": 2048, 00:09:39.119 "data_size": 63488 00:09:39.119 } 00:09:39.119 ] 00:09:39.119 }' 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.119 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.378 
09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.378 09:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.378 [2024-10-11 09:43:23.992195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.378 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.637 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.637 "name": "raid_bdev1", 00:09:39.637 "aliases": [ 00:09:39.637 "a55d0829-3034-454f-814d-9a870493ecf6" 00:09:39.637 ], 00:09:39.637 "product_name": "Raid Volume", 00:09:39.637 "block_size": 512, 00:09:39.637 "num_blocks": 63488, 00:09:39.637 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:39.637 "assigned_rate_limits": { 00:09:39.637 "rw_ios_per_sec": 0, 00:09:39.637 "rw_mbytes_per_sec": 0, 00:09:39.637 "r_mbytes_per_sec": 0, 00:09:39.637 "w_mbytes_per_sec": 0 00:09:39.637 }, 00:09:39.637 "claimed": false, 00:09:39.637 "zoned": false, 00:09:39.637 "supported_io_types": { 00:09:39.637 "read": true, 00:09:39.637 "write": true, 00:09:39.637 "unmap": false, 00:09:39.637 "flush": false, 00:09:39.637 "reset": true, 00:09:39.637 "nvme_admin": false, 00:09:39.637 "nvme_io": false, 00:09:39.637 "nvme_io_md": false, 00:09:39.637 "write_zeroes": true, 00:09:39.637 "zcopy": false, 00:09:39.637 "get_zone_info": false, 00:09:39.637 "zone_management": false, 00:09:39.637 "zone_append": false, 00:09:39.637 "compare": false, 00:09:39.637 "compare_and_write": false, 00:09:39.637 "abort": false, 00:09:39.637 "seek_hole": false, 
00:09:39.637 "seek_data": false, 00:09:39.637 "copy": false, 00:09:39.637 "nvme_iov_md": false 00:09:39.637 }, 00:09:39.637 "memory_domains": [ 00:09:39.637 { 00:09:39.637 "dma_device_id": "system", 00:09:39.637 "dma_device_type": 1 00:09:39.637 }, 00:09:39.637 { 00:09:39.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.637 "dma_device_type": 2 00:09:39.637 }, 00:09:39.637 { 00:09:39.637 "dma_device_id": "system", 00:09:39.637 "dma_device_type": 1 00:09:39.637 }, 00:09:39.637 { 00:09:39.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.637 "dma_device_type": 2 00:09:39.637 } 00:09:39.637 ], 00:09:39.637 "driver_specific": { 00:09:39.637 "raid": { 00:09:39.637 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:39.637 "strip_size_kb": 0, 00:09:39.637 "state": "online", 00:09:39.637 "raid_level": "raid1", 00:09:39.637 "superblock": true, 00:09:39.637 "num_base_bdevs": 2, 00:09:39.637 "num_base_bdevs_discovered": 2, 00:09:39.637 "num_base_bdevs_operational": 2, 00:09:39.637 "base_bdevs_list": [ 00:09:39.637 { 00:09:39.637 "name": "pt1", 00:09:39.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.637 "is_configured": true, 00:09:39.637 "data_offset": 2048, 00:09:39.637 "data_size": 63488 00:09:39.637 }, 00:09:39.637 { 00:09:39.637 "name": "pt2", 00:09:39.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.637 "is_configured": true, 00:09:39.637 "data_offset": 2048, 00:09:39.637 "data_size": 63488 00:09:39.637 } 00:09:39.637 ] 00:09:39.637 } 00:09:39.637 } 00:09:39.637 }' 00:09:39.637 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.637 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:39.637 pt2' 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.638 09:43:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.638 [2024-10-11 09:43:24.231798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.638 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a55d0829-3034-454f-814d-9a870493ecf6 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a55d0829-3034-454f-814d-9a870493ecf6 ']' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 [2024-10-11 09:43:24.279388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.897 [2024-10-11 09:43:24.279501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.897 [2024-10-11 09:43:24.279615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.897 [2024-10-11 09:43:24.279684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.897 [2024-10-11 09:43:24.279698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 [2024-10-11 09:43:24.411179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:39.897 [2024-10-11 09:43:24.413415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:39.897 [2024-10-11 09:43:24.413554] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:09:39.897 [2024-10-11 09:43:24.413668] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:39.897 [2024-10-11 09:43:24.413769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.897 [2024-10-11 09:43:24.413808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:39.897 request: 00:09:39.897 { 00:09:39.897 "name": "raid_bdev1", 00:09:39.897 "raid_level": "raid1", 00:09:39.897 "base_bdevs": [ 00:09:39.897 "malloc1", 00:09:39.897 "malloc2" 00:09:39.897 ], 00:09:39.897 "superblock": false, 00:09:39.897 "method": "bdev_raid_create", 00:09:39.897 "req_id": 1 00:09:39.897 } 00:09:39.897 Got JSON-RPC error response 00:09:39.897 response: 00:09:39.897 { 00:09:39.897 "code": -17, 00:09:39.897 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:39.897 } 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 [2024-10-11 09:43:24.475045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.897 [2024-10-11 09:43:24.475170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.897 [2024-10-11 09:43:24.475215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.897 [2024-10-11 09:43:24.475278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.897 [2024-10-11 09:43:24.477877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.897 [2024-10-11 09:43:24.477964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.897 [2024-10-11 09:43:24.478099] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:39.897 [2024-10-11 09:43:24.478197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.897 pt1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.897 09:43:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.897 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.155 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.155 "name": "raid_bdev1", 00:09:40.155 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:40.155 "strip_size_kb": 0, 00:09:40.156 "state": "configuring", 00:09:40.156 "raid_level": "raid1", 00:09:40.156 "superblock": true, 00:09:40.156 "num_base_bdevs": 2, 00:09:40.156 "num_base_bdevs_discovered": 1, 00:09:40.156 "num_base_bdevs_operational": 2, 00:09:40.156 "base_bdevs_list": [ 00:09:40.156 { 00:09:40.156 "name": "pt1", 00:09:40.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.156 
"is_configured": true, 00:09:40.156 "data_offset": 2048, 00:09:40.156 "data_size": 63488 00:09:40.156 }, 00:09:40.156 { 00:09:40.156 "name": null, 00:09:40.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.156 "is_configured": false, 00:09:40.156 "data_offset": 2048, 00:09:40.156 "data_size": 63488 00:09:40.156 } 00:09:40.156 ] 00:09:40.156 }' 00:09:40.156 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.156 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.415 [2024-10-11 09:43:24.942239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.415 [2024-10-11 09:43:24.942383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.415 [2024-10-11 09:43:24.942417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:40.415 [2024-10-11 09:43:24.942430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.415 [2024-10-11 09:43:24.942996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.415 [2024-10-11 09:43:24.943022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.415 [2024-10-11 09:43:24.943126] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:40.415 [2024-10-11 09:43:24.943152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.415 [2024-10-11 09:43:24.943286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.415 [2024-10-11 09:43:24.943299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.415 [2024-10-11 09:43:24.943560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:40.415 [2024-10-11 09:43:24.943772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.415 [2024-10-11 09:43:24.943802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:40.415 [2024-10-11 09:43:24.944000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.415 pt2 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.415 
09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.415 "name": "raid_bdev1", 00:09:40.415 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:40.415 "strip_size_kb": 0, 00:09:40.415 "state": "online", 00:09:40.415 "raid_level": "raid1", 00:09:40.415 "superblock": true, 00:09:40.415 "num_base_bdevs": 2, 00:09:40.415 "num_base_bdevs_discovered": 2, 00:09:40.415 "num_base_bdevs_operational": 2, 00:09:40.415 "base_bdevs_list": [ 00:09:40.415 { 00:09:40.415 "name": "pt1", 00:09:40.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.415 "is_configured": true, 00:09:40.415 "data_offset": 2048, 00:09:40.415 "data_size": 63488 00:09:40.415 }, 00:09:40.415 { 00:09:40.415 "name": "pt2", 00:09:40.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.415 "is_configured": true, 00:09:40.415 "data_offset": 2048, 00:09:40.415 "data_size": 63488 00:09:40.415 } 00:09:40.415 ] 00:09:40.415 }' 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:40.415 09:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.982 [2024-10-11 09:43:25.441701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.982 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.982 "name": "raid_bdev1", 00:09:40.982 "aliases": [ 00:09:40.982 "a55d0829-3034-454f-814d-9a870493ecf6" 00:09:40.982 ], 00:09:40.982 "product_name": "Raid Volume", 00:09:40.982 "block_size": 512, 00:09:40.982 "num_blocks": 63488, 00:09:40.982 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:40.982 "assigned_rate_limits": { 00:09:40.982 "rw_ios_per_sec": 0, 00:09:40.982 "rw_mbytes_per_sec": 0, 00:09:40.982 "r_mbytes_per_sec": 0, 00:09:40.982 "w_mbytes_per_sec": 0 
00:09:40.982 }, 00:09:40.982 "claimed": false, 00:09:40.982 "zoned": false, 00:09:40.982 "supported_io_types": { 00:09:40.982 "read": true, 00:09:40.982 "write": true, 00:09:40.982 "unmap": false, 00:09:40.982 "flush": false, 00:09:40.982 "reset": true, 00:09:40.982 "nvme_admin": false, 00:09:40.982 "nvme_io": false, 00:09:40.982 "nvme_io_md": false, 00:09:40.982 "write_zeroes": true, 00:09:40.982 "zcopy": false, 00:09:40.982 "get_zone_info": false, 00:09:40.982 "zone_management": false, 00:09:40.982 "zone_append": false, 00:09:40.982 "compare": false, 00:09:40.982 "compare_and_write": false, 00:09:40.982 "abort": false, 00:09:40.982 "seek_hole": false, 00:09:40.982 "seek_data": false, 00:09:40.982 "copy": false, 00:09:40.982 "nvme_iov_md": false 00:09:40.982 }, 00:09:40.982 "memory_domains": [ 00:09:40.982 { 00:09:40.982 "dma_device_id": "system", 00:09:40.982 "dma_device_type": 1 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.982 "dma_device_type": 2 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "system", 00:09:40.982 "dma_device_type": 1 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.983 "dma_device_type": 2 00:09:40.983 } 00:09:40.983 ], 00:09:40.983 "driver_specific": { 00:09:40.983 "raid": { 00:09:40.983 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:40.983 "strip_size_kb": 0, 00:09:40.983 "state": "online", 00:09:40.983 "raid_level": "raid1", 00:09:40.983 "superblock": true, 00:09:40.983 "num_base_bdevs": 2, 00:09:40.983 "num_base_bdevs_discovered": 2, 00:09:40.983 "num_base_bdevs_operational": 2, 00:09:40.983 "base_bdevs_list": [ 00:09:40.983 { 00:09:40.983 "name": "pt1", 00:09:40.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.983 "is_configured": true, 00:09:40.983 "data_offset": 2048, 00:09:40.983 "data_size": 63488 00:09:40.983 }, 00:09:40.983 { 00:09:40.983 "name": "pt2", 00:09:40.983 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:40.983 "is_configured": true, 00:09:40.983 "data_offset": 2048, 00:09:40.983 "data_size": 63488 00:09:40.983 } 00:09:40.983 ] 00:09:40.983 } 00:09:40.983 } 00:09:40.983 }' 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.983 pt2' 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.983 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.242 [2024-10-11 09:43:25.681279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a55d0829-3034-454f-814d-9a870493ecf6 '!=' a55d0829-3034-454f-814d-9a870493ecf6 ']' 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.242 [2024-10-11 09:43:25.728959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.242 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.243 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.243 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.243 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:41.243 "name": "raid_bdev1", 00:09:41.243 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:41.243 "strip_size_kb": 0, 00:09:41.243 "state": "online", 00:09:41.243 "raid_level": "raid1", 00:09:41.243 "superblock": true, 00:09:41.243 "num_base_bdevs": 2, 00:09:41.243 "num_base_bdevs_discovered": 1, 00:09:41.243 "num_base_bdevs_operational": 1, 00:09:41.243 "base_bdevs_list": [ 00:09:41.243 { 00:09:41.243 "name": null, 00:09:41.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.243 "is_configured": false, 00:09:41.243 "data_offset": 0, 00:09:41.243 "data_size": 63488 00:09:41.243 }, 00:09:41.243 { 00:09:41.243 "name": "pt2", 00:09:41.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.243 "is_configured": true, 00:09:41.243 "data_offset": 2048, 00:09:41.243 "data_size": 63488 00:09:41.243 } 00:09:41.243 ] 00:09:41.243 }' 00:09:41.243 09:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.243 09:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.811 [2024-10-11 09:43:26.212107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.811 [2024-10-11 09:43:26.212210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.811 [2024-10-11 09:43:26.212365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.811 [2024-10-11 09:43:26.212469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.811 [2024-10-11 09:43:26.212525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.811 [2024-10-11 09:43:26.284021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.811 [2024-10-11 09:43:26.284195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.811 [2024-10-11 09:43:26.284223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:41.811 [2024-10-11 09:43:26.284235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.811 [2024-10-11 09:43:26.286766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.811 [2024-10-11 09:43:26.286816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.811 [2024-10-11 09:43:26.286931] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.811 [2024-10-11 09:43:26.286986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.811 [2024-10-11 09:43:26.287105] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:41.811 [2024-10-11 09:43:26.287120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.811 [2024-10-11 09:43:26.287397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:41.811 [2024-10-11 09:43:26.287643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:41.811 [2024-10-11 09:43:26.287659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:09:41.811 [2024-10-11 09:43:26.287905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.811 pt2 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.811 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:41.812 "name": "raid_bdev1", 00:09:41.812 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:41.812 "strip_size_kb": 0, 00:09:41.812 "state": "online", 00:09:41.812 "raid_level": "raid1", 00:09:41.812 "superblock": true, 00:09:41.812 "num_base_bdevs": 2, 00:09:41.812 "num_base_bdevs_discovered": 1, 00:09:41.812 "num_base_bdevs_operational": 1, 00:09:41.812 "base_bdevs_list": [ 00:09:41.812 { 00:09:41.812 "name": null, 00:09:41.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.812 "is_configured": false, 00:09:41.812 "data_offset": 2048, 00:09:41.812 "data_size": 63488 00:09:41.812 }, 00:09:41.812 { 00:09:41.812 "name": "pt2", 00:09:41.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.812 "is_configured": true, 00:09:41.812 "data_offset": 2048, 00:09:41.812 "data_size": 63488 00:09:41.812 } 00:09:41.812 ] 00:09:41.812 }' 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.812 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.380 [2024-10-11 09:43:26.711316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.380 [2024-10-11 09:43:26.711352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.380 [2024-10-11 09:43:26.711444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.380 [2024-10-11 09:43:26.711503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.380 [2024-10-11 09:43:26.711514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.380 [2024-10-11 09:43:26.771249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:42.380 [2024-10-11 09:43:26.771403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.380 [2024-10-11 09:43:26.771446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:42.380 [2024-10-11 09:43:26.771459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.380 [2024-10-11 09:43:26.774070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.380 [2024-10-11 09:43:26.774107] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:42.380 [2024-10-11 09:43:26.774226] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:42.380 [2024-10-11 09:43:26.774274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:42.380 [2024-10-11 09:43:26.774420] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:42.380 [2024-10-11 09:43:26.774432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.380 [2024-10-11 09:43:26.774449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:42.380 [2024-10-11 09:43:26.774515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.380 [2024-10-11 09:43:26.774617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:42.380 [2024-10-11 09:43:26.774627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.380 [2024-10-11 09:43:26.774890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:42.380 [2024-10-11 09:43:26.775076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:42.380 [2024-10-11 09:43:26.775090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:42.380 [2024-10-11 09:43:26.775313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.380 pt1 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.380 "name": "raid_bdev1", 00:09:42.380 "uuid": "a55d0829-3034-454f-814d-9a870493ecf6", 00:09:42.380 "strip_size_kb": 0, 00:09:42.380 "state": "online", 00:09:42.380 "raid_level": "raid1", 00:09:42.380 "superblock": true, 00:09:42.380 "num_base_bdevs": 2, 00:09:42.380 "num_base_bdevs_discovered": 1, 00:09:42.380 "num_base_bdevs_operational": 
1, 00:09:42.380 "base_bdevs_list": [ 00:09:42.380 { 00:09:42.380 "name": null, 00:09:42.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.380 "is_configured": false, 00:09:42.380 "data_offset": 2048, 00:09:42.380 "data_size": 63488 00:09:42.380 }, 00:09:42.380 { 00:09:42.380 "name": "pt2", 00:09:42.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.380 "is_configured": true, 00:09:42.380 "data_offset": 2048, 00:09:42.380 "data_size": 63488 00:09:42.380 } 00:09:42.380 ] 00:09:42.380 }' 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.380 09:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.639 09:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:42.639 [2024-10-11 09:43:27.259017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a55d0829-3034-454f-814d-9a870493ecf6 '!=' a55d0829-3034-454f-814d-9a870493ecf6 ']' 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63626 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63626 ']' 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63626 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63626 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63626' 00:09:42.898 killing process with pid 63626 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63626 00:09:42.898 [2024-10-11 09:43:27.347522] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.898 [2024-10-11 09:43:27.347704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.898 09:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63626 00:09:42.898 [2024-10-11 09:43:27.347776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.898 [2024-10-11 09:43:27.347823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:09:43.157 [2024-10-11 09:43:27.563305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.095 09:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:44.095 00:09:44.095 real 0m6.248s 00:09:44.095 user 0m9.479s 00:09:44.095 sys 0m1.046s 00:09:44.095 09:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.095 09:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.095 ************************************ 00:09:44.354 END TEST raid_superblock_test 00:09:44.354 ************************************ 00:09:44.354 09:43:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:44.354 09:43:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:44.354 09:43:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.354 09:43:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.354 ************************************ 00:09:44.354 START TEST raid_read_error_test 00:09:44.354 ************************************ 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7XrFg6n3KA 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63956 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63956 00:09:44.354 
09:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63956 ']' 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.354 09:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.354 [2024-10-11 09:43:28.907217] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:44.354 [2024-10-11 09:43:28.907452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63956 ] 00:09:44.612 [2024-10-11 09:43:29.073136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.612 [2024-10-11 09:43:29.202570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.870 [2024-10-11 09:43:29.433565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.870 [2024-10-11 09:43:29.433717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.437 BaseBdev1_malloc 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.437 true 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.437 [2024-10-11 09:43:29.831530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.437 [2024-10-11 09:43:29.831592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.437 [2024-10-11 09:43:29.831612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:45.437 [2024-10-11 09:43:29.831623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.437 [2024-10-11 09:43:29.833890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.437 [2024-10-11 09:43:29.833995] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.437 BaseBdev1 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.437 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.438 BaseBdev2_malloc 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.438 true 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.438 [2024-10-11 09:43:29.898257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.438 [2024-10-11 09:43:29.898322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.438 [2024-10-11 09:43:29.898341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:45.438 [2024-10-11 09:43:29.898352] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.438 [2024-10-11 09:43:29.900796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.438 [2024-10-11 09:43:29.900843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.438 BaseBdev2 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.438 [2024-10-11 09:43:29.906306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.438 [2024-10-11 09:43:29.908435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.438 [2024-10-11 09:43:29.908723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:45.438 [2024-10-11 09:43:29.908766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.438 [2024-10-11 09:43:29.909073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:45.438 [2024-10-11 09:43:29.909272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.438 [2024-10-11 09:43:29.909284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:45.438 [2024-10-11 09:43:29.909453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.438 "name": "raid_bdev1", 00:09:45.438 "uuid": "d5b4d075-0b14-403b-9ed0-a35ce97a9385", 00:09:45.438 "strip_size_kb": 0, 00:09:45.438 "state": "online", 00:09:45.438 "raid_level": "raid1", 00:09:45.438 "superblock": true, 00:09:45.438 "num_base_bdevs": 2, 00:09:45.438 
"num_base_bdevs_discovered": 2, 00:09:45.438 "num_base_bdevs_operational": 2, 00:09:45.438 "base_bdevs_list": [ 00:09:45.438 { 00:09:45.438 "name": "BaseBdev1", 00:09:45.438 "uuid": "b946739f-9be3-58d4-aa69-7c32ebfa8468", 00:09:45.438 "is_configured": true, 00:09:45.438 "data_offset": 2048, 00:09:45.438 "data_size": 63488 00:09:45.438 }, 00:09:45.438 { 00:09:45.438 "name": "BaseBdev2", 00:09:45.438 "uuid": "561d562a-bf33-5a9f-baf4-8997aea6f87c", 00:09:45.438 "is_configured": true, 00:09:45.438 "data_offset": 2048, 00:09:45.438 "data_size": 63488 00:09:45.438 } 00:09:45.438 ] 00:09:45.438 }' 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.438 09:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.007 09:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.007 09:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:46.007 [2024-10-11 09:43:30.451050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:46.946 09:43:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.946 "name": "raid_bdev1", 00:09:46.946 "uuid": "d5b4d075-0b14-403b-9ed0-a35ce97a9385", 00:09:46.946 "strip_size_kb": 0, 00:09:46.946 "state": "online", 
00:09:46.946 "raid_level": "raid1", 00:09:46.946 "superblock": true, 00:09:46.946 "num_base_bdevs": 2, 00:09:46.946 "num_base_bdevs_discovered": 2, 00:09:46.946 "num_base_bdevs_operational": 2, 00:09:46.946 "base_bdevs_list": [ 00:09:46.946 { 00:09:46.946 "name": "BaseBdev1", 00:09:46.946 "uuid": "b946739f-9be3-58d4-aa69-7c32ebfa8468", 00:09:46.946 "is_configured": true, 00:09:46.946 "data_offset": 2048, 00:09:46.946 "data_size": 63488 00:09:46.946 }, 00:09:46.946 { 00:09:46.946 "name": "BaseBdev2", 00:09:46.946 "uuid": "561d562a-bf33-5a9f-baf4-8997aea6f87c", 00:09:46.946 "is_configured": true, 00:09:46.946 "data_offset": 2048, 00:09:46.946 "data_size": 63488 00:09:46.946 } 00:09:46.946 ] 00:09:46.946 }' 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.946 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.206 [2024-10-11 09:43:31.789791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.206 [2024-10-11 09:43:31.789900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.206 [2024-10-11 09:43:31.792869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.206 [2024-10-11 09:43:31.792926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.206 [2024-10-11 09:43:31.793011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.206 [2024-10-11 09:43:31.793024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:09:47.206 { 00:09:47.206 "results": [ 00:09:47.206 { 00:09:47.206 "job": "raid_bdev1", 00:09:47.206 "core_mask": "0x1", 00:09:47.206 "workload": "randrw", 00:09:47.206 "percentage": 50, 00:09:47.206 "status": "finished", 00:09:47.206 "queue_depth": 1, 00:09:47.206 "io_size": 131072, 00:09:47.206 "runtime": 1.339328, 00:09:47.206 "iops": 16185.728962584222, 00:09:47.206 "mibps": 2023.2161203230278, 00:09:47.206 "io_failed": 0, 00:09:47.206 "io_timeout": 0, 00:09:47.206 "avg_latency_us": 58.890635506345156, 00:09:47.206 "min_latency_us": 23.58777292576419, 00:09:47.206 "max_latency_us": 1781.4917030567685 00:09:47.206 } 00:09:47.206 ], 00:09:47.206 "core_count": 1 00:09:47.206 } 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63956 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63956 ']' 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63956 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63956 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63956' 00:09:47.206 killing process with pid 63956 00:09:47.206 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63956 00:09:47.466 [2024-10-11 
09:43:31.837186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.466 09:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63956 00:09:47.466 [2024-10-11 09:43:31.970914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7XrFg6n3KA 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:48.846 00:09:48.846 real 0m4.375s 00:09:48.846 user 0m5.250s 00:09:48.846 sys 0m0.552s 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.846 09:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.846 ************************************ 00:09:48.846 END TEST raid_read_error_test 00:09:48.846 ************************************ 00:09:48.846 09:43:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:48.846 09:43:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:48.846 09:43:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.846 09:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.846 ************************************ 00:09:48.846 START TEST 
raid_write_error_test 00:09:48.846 ************************************ 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.846 09:43:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PmGf0sbrlt 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64102 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64102 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 64102 ']' 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.846 09:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.846 [2024-10-11 09:43:33.356142] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:48.846 [2024-10-11 09:43:33.356283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64102 ] 00:09:49.106 [2024-10-11 09:43:33.521639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.106 [2024-10-11 09:43:33.654548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.366 [2024-10-11 09:43:33.885267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.366 [2024-10-11 09:43:33.885343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 BaseBdev1_malloc 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 true 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 [2024-10-11 09:43:34.336462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.936 [2024-10-11 09:43:34.336551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.936 [2024-10-11 09:43:34.336580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.936 [2024-10-11 09:43:34.336593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.936 [2024-10-11 09:43:34.339244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.936 [2024-10-11 09:43:34.339299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.936 BaseBdev1 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 BaseBdev2_malloc 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.936 09:43:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 true 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 [2024-10-11 09:43:34.407167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.936 [2024-10-11 09:43:34.407221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.936 [2024-10-11 09:43:34.407237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.936 [2024-10-11 09:43:34.407247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.936 [2024-10-11 09:43:34.409477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.936 [2024-10-11 09:43:34.409517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.936 BaseBdev2 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.937 [2024-10-11 09:43:34.419205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:49.937 [2024-10-11 09:43:34.421171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.937 [2024-10-11 09:43:34.421365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.937 [2024-10-11 09:43:34.421381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.937 [2024-10-11 09:43:34.421614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:49.937 [2024-10-11 09:43:34.421802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.937 [2024-10-11 09:43:34.421814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:49.937 [2024-10-11 09:43:34.421963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.937 "name": "raid_bdev1", 00:09:49.937 "uuid": "d137ca17-7317-4f1f-a741-d291604050d2", 00:09:49.937 "strip_size_kb": 0, 00:09:49.937 "state": "online", 00:09:49.937 "raid_level": "raid1", 00:09:49.937 "superblock": true, 00:09:49.937 "num_base_bdevs": 2, 00:09:49.937 "num_base_bdevs_discovered": 2, 00:09:49.937 "num_base_bdevs_operational": 2, 00:09:49.937 "base_bdevs_list": [ 00:09:49.937 { 00:09:49.937 "name": "BaseBdev1", 00:09:49.937 "uuid": "823d9a1c-ef76-565d-8bf7-0c4b45be5579", 00:09:49.937 "is_configured": true, 00:09:49.937 "data_offset": 2048, 00:09:49.937 "data_size": 63488 00:09:49.937 }, 00:09:49.937 { 00:09:49.937 "name": "BaseBdev2", 00:09:49.937 "uuid": "c40e3ab7-881d-5b38-b1b3-83b1c19cd004", 00:09:49.937 "is_configured": true, 00:09:49.937 "data_offset": 2048, 00:09:49.937 "data_size": 63488 00:09:49.937 } 00:09:49.937 ] 00:09:49.937 }' 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.937 09:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.504 09:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.504 09:43:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.504 [2024-10-11 09:43:35.007811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.439 [2024-10-11 09:43:35.920170] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:51.439 [2024-10-11 09:43:35.920334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.439 [2024-10-11 09:43:35.920580] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.439 "name": "raid_bdev1", 00:09:51.439 "uuid": "d137ca17-7317-4f1f-a741-d291604050d2", 00:09:51.439 "strip_size_kb": 0, 00:09:51.439 "state": "online", 00:09:51.439 "raid_level": "raid1", 00:09:51.439 "superblock": true, 00:09:51.439 "num_base_bdevs": 2, 00:09:51.439 "num_base_bdevs_discovered": 1, 00:09:51.439 "num_base_bdevs_operational": 1, 00:09:51.439 "base_bdevs_list": [ 00:09:51.439 { 00:09:51.439 "name": null, 00:09:51.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.439 "is_configured": false, 00:09:51.439 "data_offset": 0, 00:09:51.439 "data_size": 63488 00:09:51.439 }, 00:09:51.439 { 00:09:51.439 "name": 
"BaseBdev2", 00:09:51.439 "uuid": "c40e3ab7-881d-5b38-b1b3-83b1c19cd004", 00:09:51.439 "is_configured": true, 00:09:51.439 "data_offset": 2048, 00:09:51.439 "data_size": 63488 00:09:51.439 } 00:09:51.439 ] 00:09:51.439 }' 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.439 09:43:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.005 [2024-10-11 09:43:36.425658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.005 [2024-10-11 09:43:36.425788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.005 [2024-10-11 09:43:36.428960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.005 [2024-10-11 09:43:36.429050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.005 [2024-10-11 09:43:36.429126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.005 [2024-10-11 09:43:36.429137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:52.005 { 00:09:52.005 "results": [ 00:09:52.005 { 00:09:52.005 "job": "raid_bdev1", 00:09:52.005 "core_mask": "0x1", 00:09:52.005 "workload": "randrw", 00:09:52.005 "percentage": 50, 00:09:52.005 "status": "finished", 00:09:52.005 "queue_depth": 1, 00:09:52.005 "io_size": 131072, 00:09:52.005 "runtime": 1.41858, 00:09:52.005 "iops": 18765.244117356793, 00:09:52.005 "mibps": 2345.655514669599, 00:09:52.005 "io_failed": 0, 00:09:52.005 "io_timeout": 0, 
00:09:52.005 "avg_latency_us": 50.32593099058724, 00:09:52.005 "min_latency_us": 23.811353711790392, 00:09:52.005 "max_latency_us": 1430.9170305676855 00:09:52.005 } 00:09:52.005 ], 00:09:52.005 "core_count": 1 00:09:52.005 } 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64102 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 64102 ']' 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 64102 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64102 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64102' 00:09:52.005 killing process with pid 64102 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 64102 00:09:52.005 09:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 64102 00:09:52.005 [2024-10-11 09:43:36.478397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.005 [2024-10-11 09:43:36.611742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep 
raid_bdev1 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PmGf0sbrlt 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:53.379 ************************************ 00:09:53.379 END TEST raid_write_error_test 00:09:53.379 ************************************ 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:53.379 00:09:53.379 real 0m4.628s 00:09:53.379 user 0m5.628s 00:09:53.379 sys 0m0.576s 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.379 09:43:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.379 09:43:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:53.379 09:43:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:53.379 09:43:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:53.379 09:43:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:53.379 09:43:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.379 09:43:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.379 ************************************ 00:09:53.379 START TEST raid_state_function_test 00:09:53.379 ************************************ 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- 
# local raid_level=raid0 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:53.379 09:43:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64245 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64245' 00:09:53.379 Process raid pid: 64245 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64245 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 64245 ']' 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:53.379 09:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.380 09:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.638 [2024-10-11 09:43:38.037778] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:53.638 [2024-10-11 09:43:38.037972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.638 [2024-10-11 09:43:38.204003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.896 [2024-10-11 09:43:38.340543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.154 [2024-10-11 09:43:38.579613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.154 [2024-10-11 09:43:38.579666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.413 [2024-10-11 09:43:38.966953] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.413 [2024-10-11 09:43:38.967019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.413 [2024-10-11 09:43:38.967031] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.413 [2024-10-11 09:43:38.967041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.413 [2024-10-11 09:43:38.967055] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.413 [2024-10-11 09:43:38.967066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.413 09:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.413 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.413 "name": "Existed_Raid", 00:09:54.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.413 "strip_size_kb": 64, 00:09:54.413 "state": "configuring", 00:09:54.413 "raid_level": "raid0", 00:09:54.413 "superblock": false, 00:09:54.413 "num_base_bdevs": 3, 00:09:54.413 "num_base_bdevs_discovered": 0, 00:09:54.413 "num_base_bdevs_operational": 3, 00:09:54.413 "base_bdevs_list": [ 00:09:54.413 { 00:09:54.413 "name": "BaseBdev1", 00:09:54.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.413 "is_configured": false, 00:09:54.413 "data_offset": 0, 00:09:54.413 "data_size": 0 00:09:54.413 }, 00:09:54.413 { 00:09:54.413 "name": "BaseBdev2", 00:09:54.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.413 "is_configured": false, 00:09:54.413 "data_offset": 0, 00:09:54.414 "data_size": 0 00:09:54.414 }, 00:09:54.414 { 00:09:54.414 "name": "BaseBdev3", 00:09:54.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.414 "is_configured": false, 00:09:54.414 "data_offset": 0, 00:09:54.414 "data_size": 0 00:09:54.414 } 00:09:54.414 ] 00:09:54.414 }' 00:09:54.414 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.414 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.981 09:43:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 [2024-10-11 09:43:39.442167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.981 [2024-10-11 09:43:39.442271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 [2024-10-11 09:43:39.454162] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.981 [2024-10-11 09:43:39.454253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.981 [2024-10-11 09:43:39.454286] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.981 [2024-10-11 09:43:39.454312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.981 [2024-10-11 09:43:39.454333] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.981 [2024-10-11 09:43:39.454357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 [2024-10-11 09:43:39.509022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.981 BaseBdev1 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 [ 00:09:54.981 { 00:09:54.981 "name": "BaseBdev1", 00:09:54.981 "aliases": [ 00:09:54.981 "97e95b9c-33f9-4d59-8948-b54411711dec" 00:09:54.981 ], 00:09:54.981 
"product_name": "Malloc disk", 00:09:54.981 "block_size": 512, 00:09:54.981 "num_blocks": 65536, 00:09:54.981 "uuid": "97e95b9c-33f9-4d59-8948-b54411711dec", 00:09:54.981 "assigned_rate_limits": { 00:09:54.981 "rw_ios_per_sec": 0, 00:09:54.981 "rw_mbytes_per_sec": 0, 00:09:54.981 "r_mbytes_per_sec": 0, 00:09:54.981 "w_mbytes_per_sec": 0 00:09:54.981 }, 00:09:54.981 "claimed": true, 00:09:54.981 "claim_type": "exclusive_write", 00:09:54.981 "zoned": false, 00:09:54.981 "supported_io_types": { 00:09:54.981 "read": true, 00:09:54.981 "write": true, 00:09:54.981 "unmap": true, 00:09:54.981 "flush": true, 00:09:54.981 "reset": true, 00:09:54.981 "nvme_admin": false, 00:09:54.981 "nvme_io": false, 00:09:54.981 "nvme_io_md": false, 00:09:54.981 "write_zeroes": true, 00:09:54.981 "zcopy": true, 00:09:54.981 "get_zone_info": false, 00:09:54.981 "zone_management": false, 00:09:54.981 "zone_append": false, 00:09:54.981 "compare": false, 00:09:54.981 "compare_and_write": false, 00:09:54.981 "abort": true, 00:09:54.981 "seek_hole": false, 00:09:54.981 "seek_data": false, 00:09:54.981 "copy": true, 00:09:54.981 "nvme_iov_md": false 00:09:54.981 }, 00:09:54.981 "memory_domains": [ 00:09:54.981 { 00:09:54.981 "dma_device_id": "system", 00:09:54.981 "dma_device_type": 1 00:09:54.981 }, 00:09:54.981 { 00:09:54.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.981 "dma_device_type": 2 00:09:54.981 } 00:09:54.981 ], 00:09:54.981 "driver_specific": {} 00:09:54.981 } 00:09:54.981 ] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.981 09:43:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.981 "name": "Existed_Raid", 00:09:54.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.981 "strip_size_kb": 64, 00:09:54.981 "state": "configuring", 00:09:54.981 "raid_level": "raid0", 00:09:54.981 "superblock": false, 00:09:54.981 "num_base_bdevs": 3, 00:09:54.981 "num_base_bdevs_discovered": 1, 00:09:54.981 "num_base_bdevs_operational": 3, 00:09:54.981 "base_bdevs_list": [ 00:09:54.981 { 00:09:54.981 "name": "BaseBdev1", 
00:09:54.981 "uuid": "97e95b9c-33f9-4d59-8948-b54411711dec", 00:09:54.981 "is_configured": true, 00:09:54.981 "data_offset": 0, 00:09:54.981 "data_size": 65536 00:09:54.981 }, 00:09:54.981 { 00:09:54.981 "name": "BaseBdev2", 00:09:54.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.981 "is_configured": false, 00:09:54.981 "data_offset": 0, 00:09:54.981 "data_size": 0 00:09:54.981 }, 00:09:54.981 { 00:09:54.981 "name": "BaseBdev3", 00:09:54.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.981 "is_configured": false, 00:09:54.981 "data_offset": 0, 00:09:54.981 "data_size": 0 00:09:54.981 } 00:09:54.981 ] 00:09:54.981 }' 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.981 09:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.548 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.548 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.548 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.548 [2024-10-11 09:43:40.036196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.548 [2024-10-11 09:43:40.036322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:55.548 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.548 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.548 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.548 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.549 [2024-10-11 
09:43:40.048213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.549 [2024-10-11 09:43:40.050101] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.549 [2024-10-11 09:43:40.050190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.549 [2024-10-11 09:43:40.050205] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.549 [2024-10-11 09:43:40.050216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.549 "name": "Existed_Raid", 00:09:55.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.549 "strip_size_kb": 64, 00:09:55.549 "state": "configuring", 00:09:55.549 "raid_level": "raid0", 00:09:55.549 "superblock": false, 00:09:55.549 "num_base_bdevs": 3, 00:09:55.549 "num_base_bdevs_discovered": 1, 00:09:55.549 "num_base_bdevs_operational": 3, 00:09:55.549 "base_bdevs_list": [ 00:09:55.549 { 00:09:55.549 "name": "BaseBdev1", 00:09:55.549 "uuid": "97e95b9c-33f9-4d59-8948-b54411711dec", 00:09:55.549 "is_configured": true, 00:09:55.549 "data_offset": 0, 00:09:55.549 "data_size": 65536 00:09:55.549 }, 00:09:55.549 { 00:09:55.549 "name": "BaseBdev2", 00:09:55.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.549 "is_configured": false, 00:09:55.549 "data_offset": 0, 00:09:55.549 "data_size": 0 00:09:55.549 }, 00:09:55.549 { 00:09:55.549 "name": "BaseBdev3", 00:09:55.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.549 "is_configured": false, 00:09:55.549 "data_offset": 0, 00:09:55.549 "data_size": 0 00:09:55.549 } 00:09:55.549 ] 00:09:55.549 }' 00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:55.549 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.117 [2024-10-11 09:43:40.568415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.117 BaseBdev2 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.117 09:43:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.117 [ 00:09:56.117 { 00:09:56.117 "name": "BaseBdev2", 00:09:56.117 "aliases": [ 00:09:56.117 "c214e93e-5721-403f-8b1b-be39a41ee900" 00:09:56.117 ], 00:09:56.117 "product_name": "Malloc disk", 00:09:56.117 "block_size": 512, 00:09:56.117 "num_blocks": 65536, 00:09:56.117 "uuid": "c214e93e-5721-403f-8b1b-be39a41ee900", 00:09:56.117 "assigned_rate_limits": { 00:09:56.117 "rw_ios_per_sec": 0, 00:09:56.117 "rw_mbytes_per_sec": 0, 00:09:56.117 "r_mbytes_per_sec": 0, 00:09:56.117 "w_mbytes_per_sec": 0 00:09:56.117 }, 00:09:56.117 "claimed": true, 00:09:56.117 "claim_type": "exclusive_write", 00:09:56.117 "zoned": false, 00:09:56.117 "supported_io_types": { 00:09:56.117 "read": true, 00:09:56.117 "write": true, 00:09:56.117 "unmap": true, 00:09:56.117 "flush": true, 00:09:56.117 "reset": true, 00:09:56.117 "nvme_admin": false, 00:09:56.117 "nvme_io": false, 00:09:56.117 "nvme_io_md": false, 00:09:56.117 "write_zeroes": true, 00:09:56.117 "zcopy": true, 00:09:56.117 "get_zone_info": false, 00:09:56.117 "zone_management": false, 00:09:56.117 "zone_append": false, 00:09:56.117 "compare": false, 00:09:56.117 "compare_and_write": false, 00:09:56.117 "abort": true, 00:09:56.117 "seek_hole": false, 00:09:56.117 "seek_data": false, 00:09:56.117 "copy": true, 00:09:56.117 "nvme_iov_md": false 00:09:56.117 }, 00:09:56.117 "memory_domains": [ 00:09:56.117 { 00:09:56.117 "dma_device_id": "system", 00:09:56.117 "dma_device_type": 1 00:09:56.117 }, 00:09:56.117 { 00:09:56.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.117 "dma_device_type": 2 00:09:56.117 } 00:09:56.117 ], 00:09:56.117 "driver_specific": {} 00:09:56.117 } 00:09:56.117 ] 00:09:56.117 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.117 09:43:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.118 "name": "Existed_Raid", 00:09:56.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.118 "strip_size_kb": 64, 00:09:56.118 "state": "configuring", 00:09:56.118 "raid_level": "raid0", 00:09:56.118 "superblock": false, 00:09:56.118 "num_base_bdevs": 3, 00:09:56.118 "num_base_bdevs_discovered": 2, 00:09:56.118 "num_base_bdevs_operational": 3, 00:09:56.118 "base_bdevs_list": [ 00:09:56.118 { 00:09:56.118 "name": "BaseBdev1", 00:09:56.118 "uuid": "97e95b9c-33f9-4d59-8948-b54411711dec", 00:09:56.118 "is_configured": true, 00:09:56.118 "data_offset": 0, 00:09:56.118 "data_size": 65536 00:09:56.118 }, 00:09:56.118 { 00:09:56.118 "name": "BaseBdev2", 00:09:56.118 "uuid": "c214e93e-5721-403f-8b1b-be39a41ee900", 00:09:56.118 "is_configured": true, 00:09:56.118 "data_offset": 0, 00:09:56.118 "data_size": 65536 00:09:56.118 }, 00:09:56.118 { 00:09:56.118 "name": "BaseBdev3", 00:09:56.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.118 "is_configured": false, 00:09:56.118 "data_offset": 0, 00:09:56.118 "data_size": 0 00:09:56.118 } 00:09:56.118 ] 00:09:56.118 }' 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.118 09:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 [2024-10-11 09:43:41.102335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.687 [2024-10-11 09:43:41.102437] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.687 [2024-10-11 09:43:41.102472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:56.687 [2024-10-11 09:43:41.102960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:56.687 [2024-10-11 09:43:41.103198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.687 [2024-10-11 09:43:41.103245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:56.687 [2024-10-11 09:43:41.103600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.687 BaseBdev3 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.687 
09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 [ 00:09:56.687 { 00:09:56.687 "name": "BaseBdev3", 00:09:56.687 "aliases": [ 00:09:56.687 "7e686459-bdcf-4e69-8cb3-f03d6365cce1" 00:09:56.687 ], 00:09:56.687 "product_name": "Malloc disk", 00:09:56.687 "block_size": 512, 00:09:56.687 "num_blocks": 65536, 00:09:56.687 "uuid": "7e686459-bdcf-4e69-8cb3-f03d6365cce1", 00:09:56.687 "assigned_rate_limits": { 00:09:56.687 "rw_ios_per_sec": 0, 00:09:56.687 "rw_mbytes_per_sec": 0, 00:09:56.687 "r_mbytes_per_sec": 0, 00:09:56.687 "w_mbytes_per_sec": 0 00:09:56.687 }, 00:09:56.687 "claimed": true, 00:09:56.687 "claim_type": "exclusive_write", 00:09:56.687 "zoned": false, 00:09:56.687 "supported_io_types": { 00:09:56.687 "read": true, 00:09:56.687 "write": true, 00:09:56.687 "unmap": true, 00:09:56.687 "flush": true, 00:09:56.687 "reset": true, 00:09:56.687 "nvme_admin": false, 00:09:56.687 "nvme_io": false, 00:09:56.687 "nvme_io_md": false, 00:09:56.687 "write_zeroes": true, 00:09:56.687 "zcopy": true, 00:09:56.687 "get_zone_info": false, 00:09:56.687 "zone_management": false, 00:09:56.687 "zone_append": false, 00:09:56.687 "compare": false, 00:09:56.687 "compare_and_write": false, 00:09:56.687 "abort": true, 00:09:56.687 "seek_hole": false, 00:09:56.687 "seek_data": false, 00:09:56.687 "copy": true, 00:09:56.687 "nvme_iov_md": false 00:09:56.687 }, 00:09:56.687 "memory_domains": [ 00:09:56.687 { 00:09:56.687 "dma_device_id": "system", 00:09:56.687 "dma_device_type": 1 00:09:56.687 }, 00:09:56.687 { 00:09:56.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.687 "dma_device_type": 2 00:09:56.687 } 00:09:56.687 ], 00:09:56.687 "driver_specific": {} 00:09:56.687 } 00:09:56.687 ] 
00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.687 "name": "Existed_Raid", 00:09:56.687 "uuid": "d77d2428-0de2-43c6-a053-ad0f9dc769d8", 00:09:56.687 "strip_size_kb": 64, 00:09:56.687 "state": "online", 00:09:56.687 "raid_level": "raid0", 00:09:56.687 "superblock": false, 00:09:56.687 "num_base_bdevs": 3, 00:09:56.687 "num_base_bdevs_discovered": 3, 00:09:56.687 "num_base_bdevs_operational": 3, 00:09:56.687 "base_bdevs_list": [ 00:09:56.687 { 00:09:56.687 "name": "BaseBdev1", 00:09:56.687 "uuid": "97e95b9c-33f9-4d59-8948-b54411711dec", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 0, 00:09:56.687 "data_size": 65536 00:09:56.687 }, 00:09:56.687 { 00:09:56.687 "name": "BaseBdev2", 00:09:56.687 "uuid": "c214e93e-5721-403f-8b1b-be39a41ee900", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 0, 00:09:56.687 "data_size": 65536 00:09:56.687 }, 00:09:56.687 { 00:09:56.687 "name": "BaseBdev3", 00:09:56.687 "uuid": "7e686459-bdcf-4e69-8cb3-f03d6365cce1", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 0, 00:09:56.687 "data_size": 65536 00:09:56.687 } 00:09:56.687 ] 00:09:56.687 }' 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.687 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.256 [2024-10-11 09:43:41.629891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.256 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.256 "name": "Existed_Raid", 00:09:57.256 "aliases": [ 00:09:57.256 "d77d2428-0de2-43c6-a053-ad0f9dc769d8" 00:09:57.256 ], 00:09:57.256 "product_name": "Raid Volume", 00:09:57.256 "block_size": 512, 00:09:57.256 "num_blocks": 196608, 00:09:57.256 "uuid": "d77d2428-0de2-43c6-a053-ad0f9dc769d8", 00:09:57.256 "assigned_rate_limits": { 00:09:57.256 "rw_ios_per_sec": 0, 00:09:57.256 "rw_mbytes_per_sec": 0, 00:09:57.256 "r_mbytes_per_sec": 0, 00:09:57.256 "w_mbytes_per_sec": 0 00:09:57.256 }, 00:09:57.256 "claimed": false, 00:09:57.256 "zoned": false, 00:09:57.256 "supported_io_types": { 00:09:57.256 "read": true, 00:09:57.256 "write": true, 00:09:57.256 "unmap": true, 00:09:57.256 "flush": true, 00:09:57.256 "reset": true, 00:09:57.256 "nvme_admin": false, 00:09:57.256 "nvme_io": false, 00:09:57.256 "nvme_io_md": false, 00:09:57.256 "write_zeroes": true, 00:09:57.256 "zcopy": false, 00:09:57.256 "get_zone_info": false, 00:09:57.256 "zone_management": false, 00:09:57.256 
"zone_append": false, 00:09:57.256 "compare": false, 00:09:57.256 "compare_and_write": false, 00:09:57.256 "abort": false, 00:09:57.256 "seek_hole": false, 00:09:57.256 "seek_data": false, 00:09:57.256 "copy": false, 00:09:57.256 "nvme_iov_md": false 00:09:57.256 }, 00:09:57.257 "memory_domains": [ 00:09:57.257 { 00:09:57.257 "dma_device_id": "system", 00:09:57.257 "dma_device_type": 1 00:09:57.257 }, 00:09:57.257 { 00:09:57.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.257 "dma_device_type": 2 00:09:57.257 }, 00:09:57.257 { 00:09:57.257 "dma_device_id": "system", 00:09:57.257 "dma_device_type": 1 00:09:57.257 }, 00:09:57.257 { 00:09:57.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.257 "dma_device_type": 2 00:09:57.257 }, 00:09:57.257 { 00:09:57.257 "dma_device_id": "system", 00:09:57.257 "dma_device_type": 1 00:09:57.257 }, 00:09:57.257 { 00:09:57.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.257 "dma_device_type": 2 00:09:57.257 } 00:09:57.257 ], 00:09:57.257 "driver_specific": { 00:09:57.257 "raid": { 00:09:57.257 "uuid": "d77d2428-0de2-43c6-a053-ad0f9dc769d8", 00:09:57.257 "strip_size_kb": 64, 00:09:57.257 "state": "online", 00:09:57.257 "raid_level": "raid0", 00:09:57.257 "superblock": false, 00:09:57.257 "num_base_bdevs": 3, 00:09:57.257 "num_base_bdevs_discovered": 3, 00:09:57.257 "num_base_bdevs_operational": 3, 00:09:57.257 "base_bdevs_list": [ 00:09:57.257 { 00:09:57.257 "name": "BaseBdev1", 00:09:57.257 "uuid": "97e95b9c-33f9-4d59-8948-b54411711dec", 00:09:57.257 "is_configured": true, 00:09:57.257 "data_offset": 0, 00:09:57.257 "data_size": 65536 00:09:57.257 }, 00:09:57.257 { 00:09:57.257 "name": "BaseBdev2", 00:09:57.257 "uuid": "c214e93e-5721-403f-8b1b-be39a41ee900", 00:09:57.257 "is_configured": true, 00:09:57.257 "data_offset": 0, 00:09:57.257 "data_size": 65536 00:09:57.257 }, 00:09:57.257 { 00:09:57.257 "name": "BaseBdev3", 00:09:57.257 "uuid": "7e686459-bdcf-4e69-8cb3-f03d6365cce1", 00:09:57.257 "is_configured": true, 
00:09:57.257 "data_offset": 0, 00:09:57.257 "data_size": 65536 00:09:57.257 } 00:09:57.257 ] 00:09:57.257 } 00:09:57.257 } 00:09:57.257 }' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.257 BaseBdev2 00:09:57.257 BaseBdev3' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.257 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.517 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.517 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.517 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.517 09:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.517 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.517 09:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.517 [2024-10-11 09:43:41.929112] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.517 [2024-10-11 09:43:41.929149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.517 [2024-10-11 09:43:41.929207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.517 "name": "Existed_Raid", 00:09:57.517 "uuid": "d77d2428-0de2-43c6-a053-ad0f9dc769d8", 00:09:57.517 "strip_size_kb": 64, 00:09:57.517 "state": "offline", 00:09:57.517 "raid_level": "raid0", 00:09:57.517 "superblock": false, 00:09:57.517 "num_base_bdevs": 3, 00:09:57.517 "num_base_bdevs_discovered": 2, 00:09:57.517 "num_base_bdevs_operational": 2, 00:09:57.517 "base_bdevs_list": [ 00:09:57.517 { 00:09:57.517 "name": null, 00:09:57.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.517 "is_configured": false, 00:09:57.517 "data_offset": 0, 00:09:57.517 "data_size": 65536 00:09:57.517 }, 00:09:57.517 { 00:09:57.517 "name": "BaseBdev2", 00:09:57.517 "uuid": "c214e93e-5721-403f-8b1b-be39a41ee900", 00:09:57.517 "is_configured": true, 00:09:57.517 "data_offset": 0, 00:09:57.517 "data_size": 65536 00:09:57.517 }, 00:09:57.517 { 00:09:57.517 "name": "BaseBdev3", 00:09:57.517 "uuid": "7e686459-bdcf-4e69-8cb3-f03d6365cce1", 00:09:57.517 "is_configured": true, 00:09:57.517 "data_offset": 0, 00:09:57.517 "data_size": 65536 00:09:57.517 } 00:09:57.517 ] 00:09:57.517 }' 00:09:57.517 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.517 09:43:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.086 [2024-10-11 09:43:42.505819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.086 09:43:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.086 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.086 [2024-10-11 09:43:42.659657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.086 [2024-10-11 09:43:42.659792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.358 BaseBdev2 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.358 [ 00:09:58.358 { 00:09:58.358 "name": "BaseBdev2", 00:09:58.358 "aliases": [ 00:09:58.358 "93bd5645-5979-46b9-aaa2-beb9d8c45e96" 00:09:58.358 ], 00:09:58.358 "product_name": "Malloc disk", 00:09:58.358 "block_size": 512, 00:09:58.358 "num_blocks": 65536, 00:09:58.358 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:09:58.358 "assigned_rate_limits": { 00:09:58.358 "rw_ios_per_sec": 0, 00:09:58.358 "rw_mbytes_per_sec": 0, 00:09:58.358 "r_mbytes_per_sec": 0, 00:09:58.358 "w_mbytes_per_sec": 0 00:09:58.358 }, 00:09:58.358 "claimed": false, 00:09:58.358 "zoned": false, 00:09:58.358 "supported_io_types": { 00:09:58.358 "read": true, 00:09:58.358 "write": true, 00:09:58.358 "unmap": true, 00:09:58.358 "flush": true, 00:09:58.358 "reset": true, 00:09:58.358 "nvme_admin": false, 00:09:58.358 "nvme_io": false, 00:09:58.358 "nvme_io_md": false, 00:09:58.358 "write_zeroes": true, 00:09:58.358 "zcopy": true, 00:09:58.358 "get_zone_info": false, 00:09:58.358 "zone_management": false, 00:09:58.358 "zone_append": false, 00:09:58.358 "compare": false, 00:09:58.358 "compare_and_write": false, 00:09:58.358 "abort": true, 00:09:58.358 "seek_hole": false, 00:09:58.358 "seek_data": false, 00:09:58.358 "copy": true, 00:09:58.358 "nvme_iov_md": false 00:09:58.358 }, 00:09:58.358 "memory_domains": [ 00:09:58.358 { 00:09:58.358 "dma_device_id": "system", 00:09:58.358 "dma_device_type": 1 00:09:58.358 }, 
00:09:58.358 { 00:09:58.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.358 "dma_device_type": 2 00:09:58.358 } 00:09:58.358 ], 00:09:58.358 "driver_specific": {} 00:09:58.358 } 00:09:58.358 ] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.358 BaseBdev3 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.358 [ 00:09:58.358 { 00:09:58.358 "name": "BaseBdev3", 00:09:58.358 "aliases": [ 00:09:58.358 "67b483ce-f884-4fd4-aeb0-4ea14d2b9839" 00:09:58.358 ], 00:09:58.358 "product_name": "Malloc disk", 00:09:58.358 "block_size": 512, 00:09:58.358 "num_blocks": 65536, 00:09:58.358 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:09:58.358 "assigned_rate_limits": { 00:09:58.358 "rw_ios_per_sec": 0, 00:09:58.358 "rw_mbytes_per_sec": 0, 00:09:58.358 "r_mbytes_per_sec": 0, 00:09:58.358 "w_mbytes_per_sec": 0 00:09:58.358 }, 00:09:58.358 "claimed": false, 00:09:58.358 "zoned": false, 00:09:58.358 "supported_io_types": { 00:09:58.358 "read": true, 00:09:58.358 "write": true, 00:09:58.358 "unmap": true, 00:09:58.358 "flush": true, 00:09:58.358 "reset": true, 00:09:58.358 "nvme_admin": false, 00:09:58.358 "nvme_io": false, 00:09:58.358 "nvme_io_md": false, 00:09:58.358 "write_zeroes": true, 00:09:58.358 "zcopy": true, 00:09:58.358 "get_zone_info": false, 00:09:58.358 "zone_management": false, 00:09:58.358 "zone_append": false, 00:09:58.358 "compare": false, 00:09:58.358 "compare_and_write": false, 00:09:58.358 "abort": true, 00:09:58.358 "seek_hole": false, 00:09:58.358 "seek_data": false, 00:09:58.358 "copy": true, 00:09:58.358 "nvme_iov_md": false 00:09:58.358 }, 00:09:58.358 "memory_domains": [ 00:09:58.358 { 00:09:58.358 "dma_device_id": "system", 00:09:58.358 "dma_device_type": 1 00:09:58.358 }, 00:09:58.358 { 
00:09:58.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.358 "dma_device_type": 2 00:09:58.358 } 00:09:58.358 ], 00:09:58.358 "driver_specific": {} 00:09:58.358 } 00:09:58.358 ] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.358 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.358 [2024-10-11 09:43:42.981402] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.358 [2024-10-11 09:43:42.981502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.358 [2024-10-11 09:43:42.981577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.359 [2024-10-11 09:43:42.983587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.359 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.359 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.359 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.359 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:58.359 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.359 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.359 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.618 09:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.618 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.618 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.618 "name": "Existed_Raid", 00:09:58.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.618 "strip_size_kb": 64, 00:09:58.618 "state": "configuring", 00:09:58.618 "raid_level": "raid0", 00:09:58.618 "superblock": false, 00:09:58.618 "num_base_bdevs": 3, 00:09:58.618 "num_base_bdevs_discovered": 2, 00:09:58.618 "num_base_bdevs_operational": 3, 00:09:58.618 "base_bdevs_list": [ 00:09:58.618 { 00:09:58.618 "name": "BaseBdev1", 00:09:58.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.618 
"is_configured": false, 00:09:58.618 "data_offset": 0, 00:09:58.618 "data_size": 0 00:09:58.618 }, 00:09:58.618 { 00:09:58.618 "name": "BaseBdev2", 00:09:58.618 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:09:58.619 "is_configured": true, 00:09:58.619 "data_offset": 0, 00:09:58.619 "data_size": 65536 00:09:58.619 }, 00:09:58.619 { 00:09:58.619 "name": "BaseBdev3", 00:09:58.619 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:09:58.619 "is_configured": true, 00:09:58.619 "data_offset": 0, 00:09:58.619 "data_size": 65536 00:09:58.619 } 00:09:58.619 ] 00:09:58.619 }' 00:09:58.619 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.619 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.878 [2024-10-11 09:43:43.444597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.878 09:43:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.878 "name": "Existed_Raid", 00:09:58.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.878 "strip_size_kb": 64, 00:09:58.878 "state": "configuring", 00:09:58.878 "raid_level": "raid0", 00:09:58.878 "superblock": false, 00:09:58.878 "num_base_bdevs": 3, 00:09:58.878 "num_base_bdevs_discovered": 1, 00:09:58.878 "num_base_bdevs_operational": 3, 00:09:58.878 "base_bdevs_list": [ 00:09:58.878 { 00:09:58.878 "name": "BaseBdev1", 00:09:58.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.878 "is_configured": false, 00:09:58.878 "data_offset": 0, 00:09:58.878 "data_size": 0 00:09:58.878 }, 00:09:58.878 { 00:09:58.878 "name": null, 00:09:58.878 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:09:58.878 "is_configured": false, 00:09:58.878 "data_offset": 0, 
00:09:58.878 "data_size": 65536 00:09:58.878 }, 00:09:58.878 { 00:09:58.878 "name": "BaseBdev3", 00:09:58.878 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:09:58.878 "is_configured": true, 00:09:58.878 "data_offset": 0, 00:09:58.878 "data_size": 65536 00:09:58.878 } 00:09:58.878 ] 00:09:58.878 }' 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.878 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.448 [2024-10-11 09:43:43.984877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.448 BaseBdev1 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.448 09:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.448 [ 00:09:59.448 { 00:09:59.448 "name": "BaseBdev1", 00:09:59.448 "aliases": [ 00:09:59.448 "7330b4da-e189-4555-97e4-32984d0e8350" 00:09:59.448 ], 00:09:59.448 "product_name": "Malloc disk", 00:09:59.448 "block_size": 512, 00:09:59.448 "num_blocks": 65536, 00:09:59.448 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:09:59.448 "assigned_rate_limits": { 00:09:59.448 "rw_ios_per_sec": 0, 00:09:59.448 "rw_mbytes_per_sec": 0, 00:09:59.448 "r_mbytes_per_sec": 0, 00:09:59.448 "w_mbytes_per_sec": 0 00:09:59.448 }, 00:09:59.448 "claimed": true, 00:09:59.448 "claim_type": "exclusive_write", 00:09:59.448 "zoned": false, 00:09:59.448 "supported_io_types": { 00:09:59.448 "read": true, 00:09:59.448 "write": true, 00:09:59.448 "unmap": 
true, 00:09:59.448 "flush": true, 00:09:59.448 "reset": true, 00:09:59.448 "nvme_admin": false, 00:09:59.448 "nvme_io": false, 00:09:59.448 "nvme_io_md": false, 00:09:59.448 "write_zeroes": true, 00:09:59.448 "zcopy": true, 00:09:59.448 "get_zone_info": false, 00:09:59.448 "zone_management": false, 00:09:59.448 "zone_append": false, 00:09:59.448 "compare": false, 00:09:59.448 "compare_and_write": false, 00:09:59.448 "abort": true, 00:09:59.448 "seek_hole": false, 00:09:59.448 "seek_data": false, 00:09:59.448 "copy": true, 00:09:59.448 "nvme_iov_md": false 00:09:59.448 }, 00:09:59.448 "memory_domains": [ 00:09:59.448 { 00:09:59.448 "dma_device_id": "system", 00:09:59.448 "dma_device_type": 1 00:09:59.448 }, 00:09:59.448 { 00:09:59.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.448 "dma_device_type": 2 00:09:59.448 } 00:09:59.448 ], 00:09:59.448 "driver_specific": {} 00:09:59.448 } 00:09:59.448 ] 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.448 09:43:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.448 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.708 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.708 "name": "Existed_Raid", 00:09:59.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.708 "strip_size_kb": 64, 00:09:59.708 "state": "configuring", 00:09:59.708 "raid_level": "raid0", 00:09:59.708 "superblock": false, 00:09:59.708 "num_base_bdevs": 3, 00:09:59.708 "num_base_bdevs_discovered": 2, 00:09:59.708 "num_base_bdevs_operational": 3, 00:09:59.708 "base_bdevs_list": [ 00:09:59.708 { 00:09:59.708 "name": "BaseBdev1", 00:09:59.708 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:09:59.708 "is_configured": true, 00:09:59.708 "data_offset": 0, 00:09:59.708 "data_size": 65536 00:09:59.708 }, 00:09:59.708 { 00:09:59.708 "name": null, 00:09:59.708 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:09:59.708 "is_configured": false, 00:09:59.708 "data_offset": 0, 00:09:59.708 "data_size": 65536 00:09:59.708 }, 00:09:59.708 { 00:09:59.708 "name": "BaseBdev3", 00:09:59.708 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:09:59.708 "is_configured": true, 00:09:59.708 "data_offset": 0, 
00:09:59.708 "data_size": 65536 00:09:59.708 } 00:09:59.708 ] 00:09:59.708 }' 00:09:59.708 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.708 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 [2024-10-11 09:43:44.472103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.968 "name": "Existed_Raid", 00:09:59.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.968 "strip_size_kb": 64, 00:09:59.968 "state": "configuring", 00:09:59.968 "raid_level": "raid0", 00:09:59.968 "superblock": false, 00:09:59.968 "num_base_bdevs": 3, 00:09:59.968 "num_base_bdevs_discovered": 1, 00:09:59.968 "num_base_bdevs_operational": 3, 00:09:59.968 "base_bdevs_list": [ 00:09:59.968 { 00:09:59.968 "name": "BaseBdev1", 00:09:59.968 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:09:59.968 "is_configured": true, 00:09:59.968 "data_offset": 0, 00:09:59.968 "data_size": 65536 00:09:59.968 }, 00:09:59.968 { 
00:09:59.968 "name": null, 00:09:59.968 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:09:59.968 "is_configured": false, 00:09:59.968 "data_offset": 0, 00:09:59.968 "data_size": 65536 00:09:59.968 }, 00:09:59.968 { 00:09:59.968 "name": null, 00:09:59.968 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:09:59.968 "is_configured": false, 00:09:59.968 "data_offset": 0, 00:09:59.968 "data_size": 65536 00:09:59.968 } 00:09:59.968 ] 00:09:59.968 }' 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.968 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.537 [2024-10-11 09:43:44.967441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.537 09:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.537 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.537 "name": "Existed_Raid", 00:10:00.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.537 "strip_size_kb": 64, 00:10:00.537 "state": "configuring", 00:10:00.537 "raid_level": "raid0", 00:10:00.537 
"superblock": false, 00:10:00.537 "num_base_bdevs": 3, 00:10:00.537 "num_base_bdevs_discovered": 2, 00:10:00.537 "num_base_bdevs_operational": 3, 00:10:00.537 "base_bdevs_list": [ 00:10:00.537 { 00:10:00.537 "name": "BaseBdev1", 00:10:00.537 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:10:00.537 "is_configured": true, 00:10:00.537 "data_offset": 0, 00:10:00.537 "data_size": 65536 00:10:00.537 }, 00:10:00.537 { 00:10:00.537 "name": null, 00:10:00.537 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:10:00.537 "is_configured": false, 00:10:00.537 "data_offset": 0, 00:10:00.537 "data_size": 65536 00:10:00.537 }, 00:10:00.537 { 00:10:00.537 "name": "BaseBdev3", 00:10:00.537 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:10:00.537 "is_configured": true, 00:10:00.537 "data_offset": 0, 00:10:00.537 "data_size": 65536 00:10:00.537 } 00:10:00.537 ] 00:10:00.537 }' 00:10:00.537 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.537 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.108 [2024-10-11 09:43:45.482572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.108 "name": "Existed_Raid", 00:10:01.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.108 "strip_size_kb": 64, 00:10:01.108 "state": "configuring", 00:10:01.108 "raid_level": "raid0", 00:10:01.108 "superblock": false, 00:10:01.108 "num_base_bdevs": 3, 00:10:01.108 "num_base_bdevs_discovered": 1, 00:10:01.108 "num_base_bdevs_operational": 3, 00:10:01.108 "base_bdevs_list": [ 00:10:01.108 { 00:10:01.108 "name": null, 00:10:01.108 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:10:01.108 "is_configured": false, 00:10:01.108 "data_offset": 0, 00:10:01.108 "data_size": 65536 00:10:01.108 }, 00:10:01.108 { 00:10:01.108 "name": null, 00:10:01.108 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:10:01.108 "is_configured": false, 00:10:01.108 "data_offset": 0, 00:10:01.108 "data_size": 65536 00:10:01.108 }, 00:10:01.108 { 00:10:01.108 "name": "BaseBdev3", 00:10:01.108 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:10:01.108 "is_configured": true, 00:10:01.108 "data_offset": 0, 00:10:01.108 "data_size": 65536 00:10:01.108 } 00:10:01.108 ] 00:10:01.108 }' 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.108 09:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.677 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.677 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.678 [2024-10-11 09:43:46.068778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.678 "name": "Existed_Raid", 00:10:01.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.678 "strip_size_kb": 64, 00:10:01.678 "state": "configuring", 00:10:01.678 "raid_level": "raid0", 00:10:01.678 "superblock": false, 00:10:01.678 "num_base_bdevs": 3, 00:10:01.678 "num_base_bdevs_discovered": 2, 00:10:01.678 "num_base_bdevs_operational": 3, 00:10:01.678 "base_bdevs_list": [ 00:10:01.678 { 00:10:01.678 "name": null, 00:10:01.678 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:10:01.678 "is_configured": false, 00:10:01.678 "data_offset": 0, 00:10:01.678 "data_size": 65536 00:10:01.678 }, 00:10:01.678 { 00:10:01.678 "name": "BaseBdev2", 00:10:01.678 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:10:01.678 "is_configured": true, 00:10:01.678 "data_offset": 0, 00:10:01.678 "data_size": 65536 00:10:01.678 }, 00:10:01.678 { 00:10:01.678 "name": "BaseBdev3", 00:10:01.678 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:10:01.678 "is_configured": true, 00:10:01.678 "data_offset": 0, 00:10:01.678 "data_size": 65536 00:10:01.678 } 00:10:01.678 ] 00:10:01.678 }' 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.678 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.937 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.937 09:43:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.937 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.937 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.937 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7330b4da-e189-4555-97e4-32984d0e8350 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.196 [2024-10-11 09:43:46.654855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:02.196 [2024-10-11 09:43:46.655005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:02.196 [2024-10-11 09:43:46.655034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:02.196 [2024-10-11 09:43:46.655364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:02.196 [2024-10-11 09:43:46.655623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:02.196 [2024-10-11 09:43:46.655670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:02.196 [2024-10-11 09:43:46.656015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.196 NewBaseBdev 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:02.196 [ 00:10:02.196 { 00:10:02.196 "name": "NewBaseBdev", 00:10:02.196 "aliases": [ 00:10:02.196 "7330b4da-e189-4555-97e4-32984d0e8350" 00:10:02.196 ], 00:10:02.196 "product_name": "Malloc disk", 00:10:02.196 "block_size": 512, 00:10:02.196 "num_blocks": 65536, 00:10:02.196 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:10:02.196 "assigned_rate_limits": { 00:10:02.196 "rw_ios_per_sec": 0, 00:10:02.196 "rw_mbytes_per_sec": 0, 00:10:02.196 "r_mbytes_per_sec": 0, 00:10:02.196 "w_mbytes_per_sec": 0 00:10:02.196 }, 00:10:02.196 "claimed": true, 00:10:02.196 "claim_type": "exclusive_write", 00:10:02.196 "zoned": false, 00:10:02.196 "supported_io_types": { 00:10:02.196 "read": true, 00:10:02.196 "write": true, 00:10:02.196 "unmap": true, 00:10:02.196 "flush": true, 00:10:02.196 "reset": true, 00:10:02.196 "nvme_admin": false, 00:10:02.196 "nvme_io": false, 00:10:02.196 "nvme_io_md": false, 00:10:02.196 "write_zeroes": true, 00:10:02.196 "zcopy": true, 00:10:02.196 "get_zone_info": false, 00:10:02.196 "zone_management": false, 00:10:02.196 "zone_append": false, 00:10:02.196 "compare": false, 00:10:02.196 "compare_and_write": false, 00:10:02.196 "abort": true, 00:10:02.196 "seek_hole": false, 00:10:02.196 "seek_data": false, 00:10:02.196 "copy": true, 00:10:02.196 "nvme_iov_md": false 00:10:02.196 }, 00:10:02.196 "memory_domains": [ 00:10:02.196 { 00:10:02.196 "dma_device_id": "system", 00:10:02.196 "dma_device_type": 1 00:10:02.196 }, 00:10:02.196 { 00:10:02.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.196 "dma_device_type": 2 00:10:02.196 } 00:10:02.196 ], 00:10:02.196 "driver_specific": {} 00:10:02.196 } 00:10:02.196 ] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.196 "name": "Existed_Raid", 00:10:02.196 "uuid": "b80e124a-4892-4b55-8494-53faf1f2a427", 00:10:02.196 "strip_size_kb": 64, 00:10:02.196 "state": "online", 00:10:02.196 "raid_level": "raid0", 00:10:02.196 "superblock": false, 00:10:02.196 "num_base_bdevs": 3, 00:10:02.196 
"num_base_bdevs_discovered": 3, 00:10:02.196 "num_base_bdevs_operational": 3, 00:10:02.196 "base_bdevs_list": [ 00:10:02.196 { 00:10:02.196 "name": "NewBaseBdev", 00:10:02.196 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:10:02.196 "is_configured": true, 00:10:02.196 "data_offset": 0, 00:10:02.196 "data_size": 65536 00:10:02.196 }, 00:10:02.196 { 00:10:02.196 "name": "BaseBdev2", 00:10:02.196 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:10:02.196 "is_configured": true, 00:10:02.196 "data_offset": 0, 00:10:02.196 "data_size": 65536 00:10:02.196 }, 00:10:02.196 { 00:10:02.196 "name": "BaseBdev3", 00:10:02.196 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:10:02.196 "is_configured": true, 00:10:02.196 "data_offset": 0, 00:10:02.196 "data_size": 65536 00:10:02.196 } 00:10:02.196 ] 00:10:02.196 }' 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.196 09:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.765 [2024-10-11 09:43:47.158409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.765 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.765 "name": "Existed_Raid", 00:10:02.765 "aliases": [ 00:10:02.765 "b80e124a-4892-4b55-8494-53faf1f2a427" 00:10:02.765 ], 00:10:02.765 "product_name": "Raid Volume", 00:10:02.765 "block_size": 512, 00:10:02.765 "num_blocks": 196608, 00:10:02.765 "uuid": "b80e124a-4892-4b55-8494-53faf1f2a427", 00:10:02.765 "assigned_rate_limits": { 00:10:02.765 "rw_ios_per_sec": 0, 00:10:02.765 "rw_mbytes_per_sec": 0, 00:10:02.765 "r_mbytes_per_sec": 0, 00:10:02.765 "w_mbytes_per_sec": 0 00:10:02.765 }, 00:10:02.765 "claimed": false, 00:10:02.765 "zoned": false, 00:10:02.765 "supported_io_types": { 00:10:02.765 "read": true, 00:10:02.765 "write": true, 00:10:02.765 "unmap": true, 00:10:02.765 "flush": true, 00:10:02.765 "reset": true, 00:10:02.765 "nvme_admin": false, 00:10:02.765 "nvme_io": false, 00:10:02.765 "nvme_io_md": false, 00:10:02.765 "write_zeroes": true, 00:10:02.765 "zcopy": false, 00:10:02.765 "get_zone_info": false, 00:10:02.765 "zone_management": false, 00:10:02.765 "zone_append": false, 00:10:02.766 "compare": false, 00:10:02.766 "compare_and_write": false, 00:10:02.766 "abort": false, 00:10:02.766 "seek_hole": false, 00:10:02.766 "seek_data": false, 00:10:02.766 "copy": false, 00:10:02.766 "nvme_iov_md": false 00:10:02.766 }, 00:10:02.766 "memory_domains": [ 00:10:02.766 { 00:10:02.766 "dma_device_id": "system", 00:10:02.766 "dma_device_type": 1 00:10:02.766 }, 00:10:02.766 { 00:10:02.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.766 "dma_device_type": 2 00:10:02.766 }, 
00:10:02.766 { 00:10:02.766 "dma_device_id": "system", 00:10:02.766 "dma_device_type": 1 00:10:02.766 }, 00:10:02.766 { 00:10:02.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.766 "dma_device_type": 2 00:10:02.766 }, 00:10:02.766 { 00:10:02.766 "dma_device_id": "system", 00:10:02.766 "dma_device_type": 1 00:10:02.766 }, 00:10:02.766 { 00:10:02.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.766 "dma_device_type": 2 00:10:02.766 } 00:10:02.766 ], 00:10:02.766 "driver_specific": { 00:10:02.766 "raid": { 00:10:02.766 "uuid": "b80e124a-4892-4b55-8494-53faf1f2a427", 00:10:02.766 "strip_size_kb": 64, 00:10:02.766 "state": "online", 00:10:02.766 "raid_level": "raid0", 00:10:02.766 "superblock": false, 00:10:02.766 "num_base_bdevs": 3, 00:10:02.766 "num_base_bdevs_discovered": 3, 00:10:02.766 "num_base_bdevs_operational": 3, 00:10:02.766 "base_bdevs_list": [ 00:10:02.766 { 00:10:02.766 "name": "NewBaseBdev", 00:10:02.766 "uuid": "7330b4da-e189-4555-97e4-32984d0e8350", 00:10:02.766 "is_configured": true, 00:10:02.766 "data_offset": 0, 00:10:02.766 "data_size": 65536 00:10:02.766 }, 00:10:02.766 { 00:10:02.766 "name": "BaseBdev2", 00:10:02.766 "uuid": "93bd5645-5979-46b9-aaa2-beb9d8c45e96", 00:10:02.766 "is_configured": true, 00:10:02.766 "data_offset": 0, 00:10:02.766 "data_size": 65536 00:10:02.766 }, 00:10:02.766 { 00:10:02.766 "name": "BaseBdev3", 00:10:02.766 "uuid": "67b483ce-f884-4fd4-aeb0-4ea14d2b9839", 00:10:02.766 "is_configured": true, 00:10:02.766 "data_offset": 0, 00:10:02.766 "data_size": 65536 00:10:02.766 } 00:10:02.766 ] 00:10:02.766 } 00:10:02.766 } 00:10:02.766 }' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:02.766 BaseBdev2 00:10:02.766 BaseBdev3' 00:10:02.766 09:43:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.766 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.025 [2024-10-11 09:43:47.429674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.025 [2024-10-11 09:43:47.429706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.025 [2024-10-11 09:43:47.429812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.025 [2024-10-11 09:43:47.429876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.025 [2024-10-11 09:43:47.429890] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64245 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 64245 ']' 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 64245 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64245 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.025 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64245' 00:10:03.025 killing process with pid 64245 00:10:03.026 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 64245 00:10:03.026 [2024-10-11 09:43:47.486321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.026 09:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 64245 00:10:03.285 [2024-10-11 09:43:47.793704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.665 09:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.665 00:10:04.665 real 0m11.019s 00:10:04.665 user 0m17.580s 00:10:04.665 sys 0m1.877s 00:10:04.666 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:10:04.666 09:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.666 ************************************ 00:10:04.666 END TEST raid_state_function_test 00:10:04.666 ************************************ 00:10:04.666 09:43:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:04.666 09:43:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:04.666 09:43:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.666 09:43:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.666 ************************************ 00:10:04.666 START TEST raid_state_function_test_sb 00:10:04.666 ************************************ 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.666 Process raid pid: 64872 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64872 
00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64872' 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64872 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64872 ']' 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.666 09:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.666 [2024-10-11 09:43:49.128907] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:04.666 [2024-10-11 09:43:49.129132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.666 [2024-10-11 09:43:49.295071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.925 [2024-10-11 09:43:49.426740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.185 [2024-10-11 09:43:49.668133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.185 [2024-10-11 09:43:49.668229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.474 [2024-10-11 09:43:50.011114] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.474 [2024-10-11 09:43:50.011242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.474 [2024-10-11 09:43:50.011276] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.474 [2024-10-11 09:43:50.011304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.474 [2024-10-11 09:43:50.011326] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:05.474 [2024-10-11 09:43:50.011368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.474 "name": "Existed_Raid", 00:10:05.474 "uuid": "29573485-ada0-4bad-99eb-05ba4067c764", 00:10:05.474 "strip_size_kb": 64, 00:10:05.474 "state": "configuring", 00:10:05.474 "raid_level": "raid0", 00:10:05.474 "superblock": true, 00:10:05.474 "num_base_bdevs": 3, 00:10:05.474 "num_base_bdevs_discovered": 0, 00:10:05.474 "num_base_bdevs_operational": 3, 00:10:05.474 "base_bdevs_list": [ 00:10:05.474 { 00:10:05.474 "name": "BaseBdev1", 00:10:05.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.474 "is_configured": false, 00:10:05.474 "data_offset": 0, 00:10:05.474 "data_size": 0 00:10:05.474 }, 00:10:05.474 { 00:10:05.474 "name": "BaseBdev2", 00:10:05.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.474 "is_configured": false, 00:10:05.474 "data_offset": 0, 00:10:05.474 "data_size": 0 00:10:05.474 }, 00:10:05.474 { 00:10:05.474 "name": "BaseBdev3", 00:10:05.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.474 "is_configured": false, 00:10:05.474 "data_offset": 0, 00:10:05.474 "data_size": 0 00:10:05.474 } 00:10:05.474 ] 00:10:05.474 }' 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.474 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 [2024-10-11 09:43:50.514211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.049 [2024-10-11 09:43:50.514327] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 [2024-10-11 09:43:50.526227] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.049 [2024-10-11 09:43:50.526330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.049 [2024-10-11 09:43:50.526365] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.049 [2024-10-11 09:43:50.526394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.049 [2024-10-11 09:43:50.526448] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.049 [2024-10-11 09:43:50.526475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 [2024-10-11 09:43:50.580800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.049 BaseBdev1 
00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 [ 00:10:06.049 { 00:10:06.049 "name": "BaseBdev1", 00:10:06.049 "aliases": [ 00:10:06.049 "9f9a8b18-5690-458e-990c-c6558703e7dc" 00:10:06.049 ], 00:10:06.049 "product_name": "Malloc disk", 00:10:06.049 "block_size": 512, 00:10:06.049 "num_blocks": 65536, 00:10:06.049 "uuid": "9f9a8b18-5690-458e-990c-c6558703e7dc", 00:10:06.049 "assigned_rate_limits": { 00:10:06.049 
"rw_ios_per_sec": 0, 00:10:06.049 "rw_mbytes_per_sec": 0, 00:10:06.049 "r_mbytes_per_sec": 0, 00:10:06.049 "w_mbytes_per_sec": 0 00:10:06.049 }, 00:10:06.049 "claimed": true, 00:10:06.049 "claim_type": "exclusive_write", 00:10:06.049 "zoned": false, 00:10:06.049 "supported_io_types": { 00:10:06.049 "read": true, 00:10:06.049 "write": true, 00:10:06.049 "unmap": true, 00:10:06.049 "flush": true, 00:10:06.049 "reset": true, 00:10:06.049 "nvme_admin": false, 00:10:06.049 "nvme_io": false, 00:10:06.049 "nvme_io_md": false, 00:10:06.049 "write_zeroes": true, 00:10:06.049 "zcopy": true, 00:10:06.049 "get_zone_info": false, 00:10:06.049 "zone_management": false, 00:10:06.049 "zone_append": false, 00:10:06.049 "compare": false, 00:10:06.049 "compare_and_write": false, 00:10:06.049 "abort": true, 00:10:06.049 "seek_hole": false, 00:10:06.049 "seek_data": false, 00:10:06.049 "copy": true, 00:10:06.049 "nvme_iov_md": false 00:10:06.049 }, 00:10:06.049 "memory_domains": [ 00:10:06.049 { 00:10:06.049 "dma_device_id": "system", 00:10:06.049 "dma_device_type": 1 00:10:06.049 }, 00:10:06.049 { 00:10:06.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.049 "dma_device_type": 2 00:10:06.049 } 00:10:06.049 ], 00:10:06.049 "driver_specific": {} 00:10:06.049 } 00:10:06.049 ] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.049 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.049 "name": "Existed_Raid", 00:10:06.049 "uuid": "35dab4b8-0926-45ef-afa9-672d870ae7fe", 00:10:06.049 "strip_size_kb": 64, 00:10:06.049 "state": "configuring", 00:10:06.049 "raid_level": "raid0", 00:10:06.049 "superblock": true, 00:10:06.049 "num_base_bdevs": 3, 00:10:06.049 "num_base_bdevs_discovered": 1, 00:10:06.049 "num_base_bdevs_operational": 3, 00:10:06.049 "base_bdevs_list": [ 00:10:06.049 { 00:10:06.049 "name": "BaseBdev1", 00:10:06.049 "uuid": "9f9a8b18-5690-458e-990c-c6558703e7dc", 00:10:06.049 "is_configured": true, 00:10:06.049 "data_offset": 2048, 00:10:06.049 "data_size": 63488 
00:10:06.049 }, 00:10:06.049 { 00:10:06.049 "name": "BaseBdev2", 00:10:06.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.049 "is_configured": false, 00:10:06.049 "data_offset": 0, 00:10:06.049 "data_size": 0 00:10:06.049 }, 00:10:06.049 { 00:10:06.049 "name": "BaseBdev3", 00:10:06.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.050 "is_configured": false, 00:10:06.050 "data_offset": 0, 00:10:06.050 "data_size": 0 00:10:06.050 } 00:10:06.050 ] 00:10:06.050 }' 00:10:06.050 09:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.050 09:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 [2024-10-11 09:43:51.076007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.620 [2024-10-11 09:43:51.076136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 [2024-10-11 09:43:51.088035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.620 [2024-10-11 
09:43:51.089912] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.620 [2024-10-11 09:43:51.089953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.620 [2024-10-11 09:43:51.089963] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.620 [2024-10-11 09:43:51.089972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.620 "name": "Existed_Raid", 00:10:06.620 "uuid": "d4564c90-aa64-4221-a89d-7f31526437e4", 00:10:06.620 "strip_size_kb": 64, 00:10:06.620 "state": "configuring", 00:10:06.620 "raid_level": "raid0", 00:10:06.620 "superblock": true, 00:10:06.620 "num_base_bdevs": 3, 00:10:06.620 "num_base_bdevs_discovered": 1, 00:10:06.620 "num_base_bdevs_operational": 3, 00:10:06.620 "base_bdevs_list": [ 00:10:06.620 { 00:10:06.620 "name": "BaseBdev1", 00:10:06.620 "uuid": "9f9a8b18-5690-458e-990c-c6558703e7dc", 00:10:06.620 "is_configured": true, 00:10:06.620 "data_offset": 2048, 00:10:06.620 "data_size": 63488 00:10:06.620 }, 00:10:06.620 { 00:10:06.620 "name": "BaseBdev2", 00:10:06.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.620 "is_configured": false, 00:10:06.620 "data_offset": 0, 00:10:06.620 "data_size": 0 00:10:06.620 }, 00:10:06.620 { 00:10:06.620 "name": "BaseBdev3", 00:10:06.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.620 "is_configured": false, 00:10:06.620 "data_offset": 0, 00:10:06.620 "data_size": 0 00:10:06.620 } 00:10:06.620 ] 00:10:06.620 }' 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.620 09:43:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.190 [2024-10-11 09:43:51.586298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.190 BaseBdev2 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.190 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.190 [ 00:10:07.190 { 00:10:07.190 "name": "BaseBdev2", 00:10:07.190 "aliases": [ 00:10:07.190 "4ac26959-b668-4ae2-9aa4-fed12ed90359" 00:10:07.190 ], 00:10:07.190 "product_name": "Malloc disk", 00:10:07.190 "block_size": 512, 00:10:07.190 "num_blocks": 65536, 00:10:07.190 "uuid": "4ac26959-b668-4ae2-9aa4-fed12ed90359", 00:10:07.190 "assigned_rate_limits": { 00:10:07.190 "rw_ios_per_sec": 0, 00:10:07.190 "rw_mbytes_per_sec": 0, 00:10:07.190 "r_mbytes_per_sec": 0, 00:10:07.190 "w_mbytes_per_sec": 0 00:10:07.190 }, 00:10:07.190 "claimed": true, 00:10:07.190 "claim_type": "exclusive_write", 00:10:07.190 "zoned": false, 00:10:07.190 "supported_io_types": { 00:10:07.190 "read": true, 00:10:07.190 "write": true, 00:10:07.190 "unmap": true, 00:10:07.190 "flush": true, 00:10:07.191 "reset": true, 00:10:07.191 "nvme_admin": false, 00:10:07.191 "nvme_io": false, 00:10:07.191 "nvme_io_md": false, 00:10:07.191 "write_zeroes": true, 00:10:07.191 "zcopy": true, 00:10:07.191 "get_zone_info": false, 00:10:07.191 "zone_management": false, 00:10:07.191 "zone_append": false, 00:10:07.191 "compare": false, 00:10:07.191 "compare_and_write": false, 00:10:07.191 "abort": true, 00:10:07.191 "seek_hole": false, 00:10:07.191 "seek_data": false, 00:10:07.191 "copy": true, 00:10:07.191 "nvme_iov_md": false 00:10:07.191 }, 00:10:07.191 "memory_domains": [ 00:10:07.191 { 00:10:07.191 "dma_device_id": "system", 00:10:07.191 "dma_device_type": 1 00:10:07.191 }, 00:10:07.191 { 00:10:07.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.191 "dma_device_type": 2 00:10:07.191 } 00:10:07.191 ], 00:10:07.191 "driver_specific": {} 00:10:07.191 } 00:10:07.191 ] 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.191 "name": "Existed_Raid", 00:10:07.191 "uuid": "d4564c90-aa64-4221-a89d-7f31526437e4", 00:10:07.191 "strip_size_kb": 64, 00:10:07.191 "state": "configuring", 00:10:07.191 "raid_level": "raid0", 00:10:07.191 "superblock": true, 00:10:07.191 "num_base_bdevs": 3, 00:10:07.191 "num_base_bdevs_discovered": 2, 00:10:07.191 "num_base_bdevs_operational": 3, 00:10:07.191 "base_bdevs_list": [ 00:10:07.191 { 00:10:07.191 "name": "BaseBdev1", 00:10:07.191 "uuid": "9f9a8b18-5690-458e-990c-c6558703e7dc", 00:10:07.191 "is_configured": true, 00:10:07.191 "data_offset": 2048, 00:10:07.191 "data_size": 63488 00:10:07.191 }, 00:10:07.191 { 00:10:07.191 "name": "BaseBdev2", 00:10:07.191 "uuid": "4ac26959-b668-4ae2-9aa4-fed12ed90359", 00:10:07.191 "is_configured": true, 00:10:07.191 "data_offset": 2048, 00:10:07.191 "data_size": 63488 00:10:07.191 }, 00:10:07.191 { 00:10:07.191 "name": "BaseBdev3", 00:10:07.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.191 "is_configured": false, 00:10:07.191 "data_offset": 0, 00:10:07.191 "data_size": 0 00:10:07.191 } 00:10:07.191 ] 00:10:07.191 }' 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.191 09:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.450 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.710 [2024-10-11 09:43:52.118862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.710 [2024-10-11 09:43:52.119204] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:07.710 [2024-10-11 09:43:52.119267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:07.710 [2024-10-11 09:43:52.119570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:07.710 [2024-10-11 09:43:52.119803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:07.710 [2024-10-11 09:43:52.119852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:07.710 BaseBdev3 00:10:07.710 [2024-10-11 09:43:52.120049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.710 [ 00:10:07.710 { 00:10:07.710 "name": "BaseBdev3", 00:10:07.710 "aliases": [ 00:10:07.710 "bc105406-fd5d-40b6-b431-a6f66b489929" 00:10:07.710 ], 00:10:07.710 "product_name": "Malloc disk", 00:10:07.710 "block_size": 512, 00:10:07.710 "num_blocks": 65536, 00:10:07.710 "uuid": "bc105406-fd5d-40b6-b431-a6f66b489929", 00:10:07.710 "assigned_rate_limits": { 00:10:07.710 "rw_ios_per_sec": 0, 00:10:07.710 "rw_mbytes_per_sec": 0, 00:10:07.710 "r_mbytes_per_sec": 0, 00:10:07.710 "w_mbytes_per_sec": 0 00:10:07.710 }, 00:10:07.710 "claimed": true, 00:10:07.710 "claim_type": "exclusive_write", 00:10:07.710 "zoned": false, 00:10:07.710 "supported_io_types": { 00:10:07.710 "read": true, 00:10:07.710 "write": true, 00:10:07.710 "unmap": true, 00:10:07.710 "flush": true, 00:10:07.710 "reset": true, 00:10:07.710 "nvme_admin": false, 00:10:07.710 "nvme_io": false, 00:10:07.710 "nvme_io_md": false, 00:10:07.710 "write_zeroes": true, 00:10:07.710 "zcopy": true, 00:10:07.710 "get_zone_info": false, 00:10:07.710 "zone_management": false, 00:10:07.710 "zone_append": false, 00:10:07.710 "compare": false, 00:10:07.710 "compare_and_write": false, 00:10:07.710 "abort": true, 00:10:07.710 "seek_hole": false, 00:10:07.710 "seek_data": false, 00:10:07.710 "copy": true, 00:10:07.710 "nvme_iov_md": false 00:10:07.710 }, 00:10:07.710 "memory_domains": [ 00:10:07.710 { 00:10:07.710 "dma_device_id": "system", 00:10:07.710 "dma_device_type": 1 00:10:07.710 }, 00:10:07.710 { 00:10:07.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.710 "dma_device_type": 2 00:10:07.710 } 00:10:07.710 ], 00:10:07.710 "driver_specific": 
{} 00:10:07.710 } 00:10:07.710 ] 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.710 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 
09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.711 "name": "Existed_Raid", 00:10:07.711 "uuid": "d4564c90-aa64-4221-a89d-7f31526437e4", 00:10:07.711 "strip_size_kb": 64, 00:10:07.711 "state": "online", 00:10:07.711 "raid_level": "raid0", 00:10:07.711 "superblock": true, 00:10:07.711 "num_base_bdevs": 3, 00:10:07.711 "num_base_bdevs_discovered": 3, 00:10:07.711 "num_base_bdevs_operational": 3, 00:10:07.711 "base_bdevs_list": [ 00:10:07.711 { 00:10:07.711 "name": "BaseBdev1", 00:10:07.711 "uuid": "9f9a8b18-5690-458e-990c-c6558703e7dc", 00:10:07.711 "is_configured": true, 00:10:07.711 "data_offset": 2048, 00:10:07.711 "data_size": 63488 00:10:07.711 }, 00:10:07.711 { 00:10:07.711 "name": "BaseBdev2", 00:10:07.711 "uuid": "4ac26959-b668-4ae2-9aa4-fed12ed90359", 00:10:07.711 "is_configured": true, 00:10:07.711 "data_offset": 2048, 00:10:07.711 "data_size": 63488 00:10:07.711 }, 00:10:07.711 { 00:10:07.711 "name": "BaseBdev3", 00:10:07.711 "uuid": "bc105406-fd5d-40b6-b431-a6f66b489929", 00:10:07.711 "is_configured": true, 00:10:07.711 "data_offset": 2048, 00:10:07.711 "data_size": 63488 00:10:07.711 } 00:10:07.711 ] 00:10:07.711 }' 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.711 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.280 [2024-10-11 09:43:52.634476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.280 "name": "Existed_Raid", 00:10:08.280 "aliases": [ 00:10:08.280 "d4564c90-aa64-4221-a89d-7f31526437e4" 00:10:08.280 ], 00:10:08.280 "product_name": "Raid Volume", 00:10:08.280 "block_size": 512, 00:10:08.280 "num_blocks": 190464, 00:10:08.280 "uuid": "d4564c90-aa64-4221-a89d-7f31526437e4", 00:10:08.280 "assigned_rate_limits": { 00:10:08.280 "rw_ios_per_sec": 0, 00:10:08.280 "rw_mbytes_per_sec": 0, 00:10:08.280 "r_mbytes_per_sec": 0, 00:10:08.280 "w_mbytes_per_sec": 0 00:10:08.280 }, 00:10:08.280 "claimed": false, 00:10:08.280 "zoned": false, 00:10:08.280 "supported_io_types": { 00:10:08.280 "read": true, 00:10:08.280 "write": true, 00:10:08.280 "unmap": true, 00:10:08.280 "flush": true, 00:10:08.280 "reset": true, 00:10:08.280 "nvme_admin": false, 00:10:08.280 "nvme_io": false, 00:10:08.280 "nvme_io_md": false, 00:10:08.280 
"write_zeroes": true, 00:10:08.280 "zcopy": false, 00:10:08.280 "get_zone_info": false, 00:10:08.280 "zone_management": false, 00:10:08.280 "zone_append": false, 00:10:08.280 "compare": false, 00:10:08.280 "compare_and_write": false, 00:10:08.280 "abort": false, 00:10:08.280 "seek_hole": false, 00:10:08.280 "seek_data": false, 00:10:08.280 "copy": false, 00:10:08.280 "nvme_iov_md": false 00:10:08.280 }, 00:10:08.280 "memory_domains": [ 00:10:08.280 { 00:10:08.280 "dma_device_id": "system", 00:10:08.280 "dma_device_type": 1 00:10:08.280 }, 00:10:08.280 { 00:10:08.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.280 "dma_device_type": 2 00:10:08.280 }, 00:10:08.280 { 00:10:08.280 "dma_device_id": "system", 00:10:08.280 "dma_device_type": 1 00:10:08.280 }, 00:10:08.280 { 00:10:08.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.280 "dma_device_type": 2 00:10:08.280 }, 00:10:08.280 { 00:10:08.280 "dma_device_id": "system", 00:10:08.280 "dma_device_type": 1 00:10:08.280 }, 00:10:08.280 { 00:10:08.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.280 "dma_device_type": 2 00:10:08.280 } 00:10:08.280 ], 00:10:08.280 "driver_specific": { 00:10:08.280 "raid": { 00:10:08.280 "uuid": "d4564c90-aa64-4221-a89d-7f31526437e4", 00:10:08.280 "strip_size_kb": 64, 00:10:08.280 "state": "online", 00:10:08.280 "raid_level": "raid0", 00:10:08.280 "superblock": true, 00:10:08.280 "num_base_bdevs": 3, 00:10:08.280 "num_base_bdevs_discovered": 3, 00:10:08.280 "num_base_bdevs_operational": 3, 00:10:08.280 "base_bdevs_list": [ 00:10:08.280 { 00:10:08.280 "name": "BaseBdev1", 00:10:08.280 "uuid": "9f9a8b18-5690-458e-990c-c6558703e7dc", 00:10:08.280 "is_configured": true, 00:10:08.280 "data_offset": 2048, 00:10:08.280 "data_size": 63488 00:10:08.280 }, 00:10:08.280 { 00:10:08.280 "name": "BaseBdev2", 00:10:08.280 "uuid": "4ac26959-b668-4ae2-9aa4-fed12ed90359", 00:10:08.280 "is_configured": true, 00:10:08.280 "data_offset": 2048, 00:10:08.280 "data_size": 63488 00:10:08.280 }, 
00:10:08.280 { 00:10:08.280 "name": "BaseBdev3", 00:10:08.280 "uuid": "bc105406-fd5d-40b6-b431-a6f66b489929", 00:10:08.280 "is_configured": true, 00:10:08.280 "data_offset": 2048, 00:10:08.280 "data_size": 63488 00:10:08.280 } 00:10:08.280 ] 00:10:08.280 } 00:10:08.280 } 00:10:08.280 }' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:08.280 BaseBdev2 00:10:08.280 BaseBdev3' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.280 
09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.280 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.281 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.281 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.281 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.281 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.541 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.541 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.541 09:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.541 09:43:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.541 09:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 [2024-10-11 09:43:52.941638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.541 [2024-10-11 09:43:52.941749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.541 [2024-10-11 09:43:52.941840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.541 "name": "Existed_Raid", 00:10:08.541 "uuid": "d4564c90-aa64-4221-a89d-7f31526437e4", 00:10:08.541 "strip_size_kb": 64, 00:10:08.541 "state": "offline", 00:10:08.541 "raid_level": "raid0", 00:10:08.541 "superblock": true, 00:10:08.541 "num_base_bdevs": 3, 00:10:08.541 "num_base_bdevs_discovered": 2, 00:10:08.541 "num_base_bdevs_operational": 2, 00:10:08.541 "base_bdevs_list": [ 00:10:08.541 { 00:10:08.541 "name": null, 00:10:08.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.541 "is_configured": false, 00:10:08.541 "data_offset": 0, 00:10:08.541 "data_size": 63488 00:10:08.541 }, 00:10:08.541 { 00:10:08.541 "name": "BaseBdev2", 00:10:08.541 "uuid": "4ac26959-b668-4ae2-9aa4-fed12ed90359", 00:10:08.541 "is_configured": true, 00:10:08.541 "data_offset": 2048, 00:10:08.541 "data_size": 63488 00:10:08.541 }, 00:10:08.541 { 00:10:08.541 "name": "BaseBdev3", 00:10:08.541 "uuid": "bc105406-fd5d-40b6-b431-a6f66b489929", 
00:10:08.541 "is_configured": true, 00:10:08.541 "data_offset": 2048, 00:10:08.541 "data_size": 63488 00:10:08.541 } 00:10:08.541 ] 00:10:08.541 }' 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.541 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.129 [2024-10-11 09:43:53.569775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.129 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.129 [2024-10-11 09:43:53.730768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.129 [2024-10-11 09:43:53.730888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.391 BaseBdev2 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.391 [ 00:10:09.391 { 00:10:09.391 "name": "BaseBdev2", 00:10:09.391 "aliases": [ 00:10:09.391 "06cdca40-915c-4bda-8e7f-f56681ec439a" 00:10:09.391 ], 00:10:09.391 "product_name": "Malloc disk", 00:10:09.391 "block_size": 512, 00:10:09.391 "num_blocks": 65536, 00:10:09.391 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:09.391 "assigned_rate_limits": { 00:10:09.391 "rw_ios_per_sec": 0, 00:10:09.391 "rw_mbytes_per_sec": 0, 00:10:09.391 "r_mbytes_per_sec": 0, 00:10:09.391 "w_mbytes_per_sec": 0 00:10:09.391 }, 00:10:09.391 "claimed": false, 00:10:09.391 "zoned": false, 00:10:09.391 "supported_io_types": { 00:10:09.391 "read": true, 00:10:09.391 "write": true, 00:10:09.391 "unmap": true, 00:10:09.391 "flush": true, 00:10:09.391 "reset": true, 00:10:09.391 "nvme_admin": false, 00:10:09.391 "nvme_io": false, 00:10:09.391 "nvme_io_md": false, 00:10:09.391 "write_zeroes": true, 00:10:09.391 "zcopy": true, 00:10:09.391 "get_zone_info": false, 00:10:09.391 "zone_management": false, 00:10:09.391 
"zone_append": false, 00:10:09.391 "compare": false, 00:10:09.391 "compare_and_write": false, 00:10:09.391 "abort": true, 00:10:09.391 "seek_hole": false, 00:10:09.391 "seek_data": false, 00:10:09.391 "copy": true, 00:10:09.391 "nvme_iov_md": false 00:10:09.391 }, 00:10:09.391 "memory_domains": [ 00:10:09.391 { 00:10:09.391 "dma_device_id": "system", 00:10:09.391 "dma_device_type": 1 00:10:09.391 }, 00:10:09.391 { 00:10:09.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.391 "dma_device_type": 2 00:10:09.391 } 00:10:09.391 ], 00:10:09.391 "driver_specific": {} 00:10:09.391 } 00:10:09.391 ] 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.391 09:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.650 BaseBdev3 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:09.650 
09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.650 [ 00:10:09.650 { 00:10:09.650 "name": "BaseBdev3", 00:10:09.650 "aliases": [ 00:10:09.650 "97d284b3-225b-4bee-b02e-10208b1cff67" 00:10:09.650 ], 00:10:09.650 "product_name": "Malloc disk", 00:10:09.650 "block_size": 512, 00:10:09.650 "num_blocks": 65536, 00:10:09.650 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:09.650 "assigned_rate_limits": { 00:10:09.650 "rw_ios_per_sec": 0, 00:10:09.650 "rw_mbytes_per_sec": 0, 00:10:09.650 "r_mbytes_per_sec": 0, 00:10:09.650 "w_mbytes_per_sec": 0 00:10:09.650 }, 00:10:09.650 "claimed": false, 00:10:09.650 "zoned": false, 00:10:09.650 "supported_io_types": { 00:10:09.650 "read": true, 00:10:09.650 "write": true, 00:10:09.650 "unmap": true, 00:10:09.650 "flush": true, 00:10:09.650 "reset": true, 00:10:09.650 "nvme_admin": false, 00:10:09.650 "nvme_io": false, 00:10:09.650 "nvme_io_md": false, 00:10:09.650 "write_zeroes": true, 00:10:09.650 "zcopy": true, 00:10:09.650 "get_zone_info": false, 
00:10:09.650 "zone_management": false, 00:10:09.650 "zone_append": false, 00:10:09.650 "compare": false, 00:10:09.650 "compare_and_write": false, 00:10:09.650 "abort": true, 00:10:09.650 "seek_hole": false, 00:10:09.650 "seek_data": false, 00:10:09.650 "copy": true, 00:10:09.650 "nvme_iov_md": false 00:10:09.650 }, 00:10:09.650 "memory_domains": [ 00:10:09.650 { 00:10:09.650 "dma_device_id": "system", 00:10:09.650 "dma_device_type": 1 00:10:09.650 }, 00:10:09.650 { 00:10:09.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.650 "dma_device_type": 2 00:10:09.650 } 00:10:09.650 ], 00:10:09.650 "driver_specific": {} 00:10:09.650 } 00:10:09.650 ] 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.650 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.651 [2024-10-11 09:43:54.075682] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.651 [2024-10-11 09:43:54.075818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.651 [2024-10-11 09:43:54.075980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.651 [2024-10-11 09:43:54.078078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:09.651 "name": "Existed_Raid", 00:10:09.651 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:09.651 "strip_size_kb": 64, 00:10:09.651 "state": "configuring", 00:10:09.651 "raid_level": "raid0", 00:10:09.651 "superblock": true, 00:10:09.651 "num_base_bdevs": 3, 00:10:09.651 "num_base_bdevs_discovered": 2, 00:10:09.651 "num_base_bdevs_operational": 3, 00:10:09.651 "base_bdevs_list": [ 00:10:09.651 { 00:10:09.651 "name": "BaseBdev1", 00:10:09.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.651 "is_configured": false, 00:10:09.651 "data_offset": 0, 00:10:09.651 "data_size": 0 00:10:09.651 }, 00:10:09.651 { 00:10:09.651 "name": "BaseBdev2", 00:10:09.651 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:09.651 "is_configured": true, 00:10:09.651 "data_offset": 2048, 00:10:09.651 "data_size": 63488 00:10:09.651 }, 00:10:09.651 { 00:10:09.651 "name": "BaseBdev3", 00:10:09.651 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:09.651 "is_configured": true, 00:10:09.651 "data_offset": 2048, 00:10:09.651 "data_size": 63488 00:10:09.651 } 00:10:09.651 ] 00:10:09.651 }' 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.651 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.217 [2024-10-11 09:43:54.570878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.217 "name": "Existed_Raid", 00:10:10.217 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:10.217 "strip_size_kb": 64, 00:10:10.217 "state": "configuring", 00:10:10.217 "raid_level": "raid0", 
00:10:10.217 "superblock": true, 00:10:10.217 "num_base_bdevs": 3, 00:10:10.217 "num_base_bdevs_discovered": 1, 00:10:10.217 "num_base_bdevs_operational": 3, 00:10:10.217 "base_bdevs_list": [ 00:10:10.217 { 00:10:10.217 "name": "BaseBdev1", 00:10:10.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.217 "is_configured": false, 00:10:10.217 "data_offset": 0, 00:10:10.217 "data_size": 0 00:10:10.217 }, 00:10:10.217 { 00:10:10.217 "name": null, 00:10:10.217 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:10.217 "is_configured": false, 00:10:10.217 "data_offset": 0, 00:10:10.217 "data_size": 63488 00:10:10.217 }, 00:10:10.217 { 00:10:10.217 "name": "BaseBdev3", 00:10:10.217 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:10.217 "is_configured": true, 00:10:10.217 "data_offset": 2048, 00:10:10.217 "data_size": 63488 00:10:10.217 } 00:10:10.217 ] 00:10:10.217 }' 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.217 09:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.476 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.735 [2024-10-11 09:43:55.135392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.735 BaseBdev1 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.735 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.735 [ 00:10:10.735 { 00:10:10.735 "name": "BaseBdev1", 00:10:10.735 
"aliases": [ 00:10:10.735 "f779a9c8-c911-41fa-9366-cafa0be056a4" 00:10:10.735 ], 00:10:10.735 "product_name": "Malloc disk", 00:10:10.735 "block_size": 512, 00:10:10.735 "num_blocks": 65536, 00:10:10.735 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:10.735 "assigned_rate_limits": { 00:10:10.735 "rw_ios_per_sec": 0, 00:10:10.735 "rw_mbytes_per_sec": 0, 00:10:10.735 "r_mbytes_per_sec": 0, 00:10:10.735 "w_mbytes_per_sec": 0 00:10:10.735 }, 00:10:10.735 "claimed": true, 00:10:10.735 "claim_type": "exclusive_write", 00:10:10.736 "zoned": false, 00:10:10.736 "supported_io_types": { 00:10:10.736 "read": true, 00:10:10.736 "write": true, 00:10:10.736 "unmap": true, 00:10:10.736 "flush": true, 00:10:10.736 "reset": true, 00:10:10.736 "nvme_admin": false, 00:10:10.736 "nvme_io": false, 00:10:10.736 "nvme_io_md": false, 00:10:10.736 "write_zeroes": true, 00:10:10.736 "zcopy": true, 00:10:10.736 "get_zone_info": false, 00:10:10.736 "zone_management": false, 00:10:10.736 "zone_append": false, 00:10:10.736 "compare": false, 00:10:10.736 "compare_and_write": false, 00:10:10.736 "abort": true, 00:10:10.736 "seek_hole": false, 00:10:10.736 "seek_data": false, 00:10:10.736 "copy": true, 00:10:10.736 "nvme_iov_md": false 00:10:10.736 }, 00:10:10.736 "memory_domains": [ 00:10:10.736 { 00:10:10.736 "dma_device_id": "system", 00:10:10.736 "dma_device_type": 1 00:10:10.736 }, 00:10:10.736 { 00:10:10.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.736 "dma_device_type": 2 00:10:10.736 } 00:10:10.736 ], 00:10:10.736 "driver_specific": {} 00:10:10.736 } 00:10:10.736 ] 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.736 09:43:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.736 "name": "Existed_Raid", 00:10:10.736 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:10.736 "strip_size_kb": 64, 00:10:10.736 "state": "configuring", 00:10:10.736 "raid_level": "raid0", 00:10:10.736 "superblock": true, 00:10:10.736 "num_base_bdevs": 3, 00:10:10.736 
"num_base_bdevs_discovered": 2, 00:10:10.736 "num_base_bdevs_operational": 3, 00:10:10.736 "base_bdevs_list": [ 00:10:10.736 { 00:10:10.736 "name": "BaseBdev1", 00:10:10.736 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:10.736 "is_configured": true, 00:10:10.736 "data_offset": 2048, 00:10:10.736 "data_size": 63488 00:10:10.736 }, 00:10:10.736 { 00:10:10.736 "name": null, 00:10:10.736 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:10.736 "is_configured": false, 00:10:10.736 "data_offset": 0, 00:10:10.736 "data_size": 63488 00:10:10.736 }, 00:10:10.736 { 00:10:10.736 "name": "BaseBdev3", 00:10:10.736 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:10.736 "is_configured": true, 00:10:10.736 "data_offset": 2048, 00:10:10.736 "data_size": 63488 00:10:10.736 } 00:10:10.736 ] 00:10:10.736 }' 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.736 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.994 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.994 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.994 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.994 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.994 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.253 09:43:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.253 [2024-10-11 09:43:55.654580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.253 09:43:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.253 "name": "Existed_Raid", 00:10:11.253 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:11.253 "strip_size_kb": 64, 00:10:11.253 "state": "configuring", 00:10:11.253 "raid_level": "raid0", 00:10:11.253 "superblock": true, 00:10:11.253 "num_base_bdevs": 3, 00:10:11.253 "num_base_bdevs_discovered": 1, 00:10:11.253 "num_base_bdevs_operational": 3, 00:10:11.253 "base_bdevs_list": [ 00:10:11.253 { 00:10:11.253 "name": "BaseBdev1", 00:10:11.253 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:11.253 "is_configured": true, 00:10:11.253 "data_offset": 2048, 00:10:11.253 "data_size": 63488 00:10:11.253 }, 00:10:11.253 { 00:10:11.253 "name": null, 00:10:11.253 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:11.253 "is_configured": false, 00:10:11.253 "data_offset": 0, 00:10:11.253 "data_size": 63488 00:10:11.253 }, 00:10:11.253 { 00:10:11.253 "name": null, 00:10:11.253 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:11.253 "is_configured": false, 00:10:11.253 "data_offset": 0, 00:10:11.253 "data_size": 63488 00:10:11.253 } 00:10:11.253 ] 00:10:11.253 }' 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.253 09:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 09:43:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 [2024-10-11 09:43:56.201673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.820 "name": "Existed_Raid", 00:10:11.820 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:11.820 "strip_size_kb": 64, 00:10:11.820 "state": "configuring", 00:10:11.820 "raid_level": "raid0", 00:10:11.820 "superblock": true, 00:10:11.820 "num_base_bdevs": 3, 00:10:11.820 "num_base_bdevs_discovered": 2, 00:10:11.820 "num_base_bdevs_operational": 3, 00:10:11.820 "base_bdevs_list": [ 00:10:11.820 { 00:10:11.820 "name": "BaseBdev1", 00:10:11.820 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:11.820 "is_configured": true, 00:10:11.820 "data_offset": 2048, 00:10:11.820 "data_size": 63488 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "name": null, 00:10:11.820 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:11.820 "is_configured": false, 00:10:11.820 "data_offset": 0, 00:10:11.820 "data_size": 63488 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "name": "BaseBdev3", 00:10:11.820 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:11.820 "is_configured": true, 00:10:11.820 "data_offset": 2048, 00:10:11.820 "data_size": 63488 00:10:11.820 } 00:10:11.820 ] 00:10:11.820 }' 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.820 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:10:12.079 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.079 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.079 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.079 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.079 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.339 [2024-10-11 09:43:56.736782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.339 "name": "Existed_Raid", 00:10:12.339 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:12.339 "strip_size_kb": 64, 00:10:12.339 "state": "configuring", 00:10:12.339 "raid_level": "raid0", 00:10:12.339 "superblock": true, 00:10:12.339 "num_base_bdevs": 3, 00:10:12.339 "num_base_bdevs_discovered": 1, 00:10:12.339 "num_base_bdevs_operational": 3, 00:10:12.339 "base_bdevs_list": [ 00:10:12.339 { 00:10:12.339 "name": null, 00:10:12.339 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:12.339 "is_configured": false, 00:10:12.339 "data_offset": 0, 00:10:12.339 "data_size": 63488 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "name": null, 00:10:12.339 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:12.339 "is_configured": false, 00:10:12.339 "data_offset": 0, 00:10:12.339 "data_size": 63488 00:10:12.339 
}, 00:10:12.339 { 00:10:12.339 "name": "BaseBdev3", 00:10:12.339 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:12.339 "is_configured": true, 00:10:12.339 "data_offset": 2048, 00:10:12.339 "data_size": 63488 00:10:12.339 } 00:10:12.339 ] 00:10:12.339 }' 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.339 09:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.909 [2024-10-11 09:43:57.305969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.909 "name": "Existed_Raid", 00:10:12.909 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:12.909 "strip_size_kb": 64, 00:10:12.909 "state": "configuring", 00:10:12.909 "raid_level": "raid0", 00:10:12.909 "superblock": true, 00:10:12.909 "num_base_bdevs": 3, 00:10:12.909 "num_base_bdevs_discovered": 2, 00:10:12.909 
"num_base_bdevs_operational": 3, 00:10:12.909 "base_bdevs_list": [ 00:10:12.909 { 00:10:12.909 "name": null, 00:10:12.909 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:12.909 "is_configured": false, 00:10:12.909 "data_offset": 0, 00:10:12.909 "data_size": 63488 00:10:12.909 }, 00:10:12.909 { 00:10:12.909 "name": "BaseBdev2", 00:10:12.909 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:12.909 "is_configured": true, 00:10:12.909 "data_offset": 2048, 00:10:12.909 "data_size": 63488 00:10:12.909 }, 00:10:12.909 { 00:10:12.909 "name": "BaseBdev3", 00:10:12.909 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:12.909 "is_configured": true, 00:10:12.909 "data_offset": 2048, 00:10:12.909 "data_size": 63488 00:10:12.909 } 00:10:12.909 ] 00:10:12.909 }' 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.909 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f779a9c8-c911-41fa-9366-cafa0be056a4 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.429 [2024-10-11 09:43:57.856563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:13.429 [2024-10-11 09:43:57.856926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.429 [2024-10-11 09:43:57.856988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:13.429 [2024-10-11 09:43:57.857310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:13.429 NewBaseBdev 00:10:13.429 [2024-10-11 09:43:57.857536] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.429 [2024-10-11 09:43:57.857593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:13.429 [2024-10-11 09:43:57.857789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:13.429 09:43:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.429 [ 00:10:13.429 { 00:10:13.429 "name": "NewBaseBdev", 00:10:13.429 "aliases": [ 00:10:13.429 "f779a9c8-c911-41fa-9366-cafa0be056a4" 00:10:13.429 ], 00:10:13.429 "product_name": "Malloc disk", 00:10:13.429 "block_size": 512, 00:10:13.429 "num_blocks": 65536, 00:10:13.429 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:13.429 "assigned_rate_limits": { 00:10:13.429 "rw_ios_per_sec": 0, 00:10:13.429 "rw_mbytes_per_sec": 0, 00:10:13.429 "r_mbytes_per_sec": 0, 00:10:13.429 "w_mbytes_per_sec": 0 00:10:13.429 }, 00:10:13.429 "claimed": true, 00:10:13.429 "claim_type": "exclusive_write", 00:10:13.429 "zoned": false, 00:10:13.429 "supported_io_types": { 00:10:13.429 "read": true, 00:10:13.429 "write": true, 00:10:13.429 "unmap": true, 
00:10:13.429 "flush": true, 00:10:13.429 "reset": true, 00:10:13.429 "nvme_admin": false, 00:10:13.429 "nvme_io": false, 00:10:13.429 "nvme_io_md": false, 00:10:13.429 "write_zeroes": true, 00:10:13.429 "zcopy": true, 00:10:13.429 "get_zone_info": false, 00:10:13.429 "zone_management": false, 00:10:13.429 "zone_append": false, 00:10:13.429 "compare": false, 00:10:13.429 "compare_and_write": false, 00:10:13.429 "abort": true, 00:10:13.429 "seek_hole": false, 00:10:13.429 "seek_data": false, 00:10:13.429 "copy": true, 00:10:13.429 "nvme_iov_md": false 00:10:13.429 }, 00:10:13.429 "memory_domains": [ 00:10:13.429 { 00:10:13.429 "dma_device_id": "system", 00:10:13.429 "dma_device_type": 1 00:10:13.429 }, 00:10:13.429 { 00:10:13.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.429 "dma_device_type": 2 00:10:13.429 } 00:10:13.429 ], 00:10:13.429 "driver_specific": {} 00:10:13.429 } 00:10:13.429 ] 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.429 09:43:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.429 "name": "Existed_Raid", 00:10:13.429 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4", 00:10:13.429 "strip_size_kb": 64, 00:10:13.429 "state": "online", 00:10:13.429 "raid_level": "raid0", 00:10:13.429 "superblock": true, 00:10:13.429 "num_base_bdevs": 3, 00:10:13.429 "num_base_bdevs_discovered": 3, 00:10:13.429 "num_base_bdevs_operational": 3, 00:10:13.429 "base_bdevs_list": [ 00:10:13.429 { 00:10:13.429 "name": "NewBaseBdev", 00:10:13.429 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4", 00:10:13.429 "is_configured": true, 00:10:13.429 "data_offset": 2048, 00:10:13.429 "data_size": 63488 00:10:13.429 }, 00:10:13.429 { 00:10:13.429 "name": "BaseBdev2", 00:10:13.429 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a", 00:10:13.429 "is_configured": true, 00:10:13.429 "data_offset": 2048, 00:10:13.429 "data_size": 63488 00:10:13.429 }, 00:10:13.429 { 00:10:13.429 "name": "BaseBdev3", 00:10:13.429 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67", 00:10:13.429 "is_configured": 
true, 00:10:13.429 "data_offset": 2048, 00:10:13.429 "data_size": 63488 00:10:13.429 } 00:10:13.429 ] 00:10:13.429 }' 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.429 09:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.689 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.689 [2024-10-11 09:43:58.312257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.948 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.948 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.948 "name": "Existed_Raid", 00:10:13.948 "aliases": [ 00:10:13.948 "aba9007f-6362-4b66-8b76-2b9ca40b96a4" 00:10:13.948 ], 00:10:13.948 "product_name": "Raid Volume", 
00:10:13.948 "block_size": 512,
00:10:13.948 "num_blocks": 190464,
00:10:13.948 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4",
00:10:13.948 "assigned_rate_limits": {
00:10:13.948 "rw_ios_per_sec": 0,
00:10:13.948 "rw_mbytes_per_sec": 0,
00:10:13.948 "r_mbytes_per_sec": 0,
00:10:13.948 "w_mbytes_per_sec": 0
00:10:13.948 },
00:10:13.948 "claimed": false,
00:10:13.948 "zoned": false,
00:10:13.948 "supported_io_types": {
00:10:13.948 "read": true,
00:10:13.948 "write": true,
00:10:13.948 "unmap": true,
00:10:13.948 "flush": true,
00:10:13.948 "reset": true,
00:10:13.948 "nvme_admin": false,
00:10:13.949 "nvme_io": false,
00:10:13.949 "nvme_io_md": false,
00:10:13.949 "write_zeroes": true,
00:10:13.949 "zcopy": false,
00:10:13.949 "get_zone_info": false,
00:10:13.949 "zone_management": false,
00:10:13.949 "zone_append": false,
00:10:13.949 "compare": false,
00:10:13.949 "compare_and_write": false,
00:10:13.949 "abort": false,
00:10:13.949 "seek_hole": false,
00:10:13.949 "seek_data": false,
00:10:13.949 "copy": false,
00:10:13.949 "nvme_iov_md": false
00:10:13.949 },
00:10:13.949 "memory_domains": [
00:10:13.949 {
00:10:13.949 "dma_device_id": "system",
00:10:13.949 "dma_device_type": 1
00:10:13.949 },
00:10:13.949 {
00:10:13.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.949 "dma_device_type": 2
00:10:13.949 },
00:10:13.949 {
00:10:13.949 "dma_device_id": "system",
00:10:13.949 "dma_device_type": 1
00:10:13.949 },
00:10:13.949 {
00:10:13.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.949 "dma_device_type": 2
00:10:13.949 },
00:10:13.949 {
00:10:13.949 "dma_device_id": "system",
00:10:13.949 "dma_device_type": 1
00:10:13.949 },
00:10:13.949 {
00:10:13.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.949 "dma_device_type": 2
00:10:13.949 }
00:10:13.949 ],
00:10:13.949 "driver_specific": {
00:10:13.949 "raid": {
00:10:13.949 "uuid": "aba9007f-6362-4b66-8b76-2b9ca40b96a4",
00:10:13.949 "strip_size_kb": 64,
00:10:13.949 "state": "online",
00:10:13.949 "raid_level": "raid0",
00:10:13.949 "superblock": true,
00:10:13.949 "num_base_bdevs": 3,
00:10:13.949 "num_base_bdevs_discovered": 3,
00:10:13.949 "num_base_bdevs_operational": 3,
00:10:13.949 "base_bdevs_list": [
00:10:13.949 {
00:10:13.949 "name": "NewBaseBdev",
00:10:13.949 "uuid": "f779a9c8-c911-41fa-9366-cafa0be056a4",
00:10:13.949 "is_configured": true,
00:10:13.949 "data_offset": 2048,
00:10:13.949 "data_size": 63488
00:10:13.949 },
00:10:13.949 {
00:10:13.949 "name": "BaseBdev2",
00:10:13.949 "uuid": "06cdca40-915c-4bda-8e7f-f56681ec439a",
00:10:13.949 "is_configured": true,
00:10:13.949 "data_offset": 2048,
00:10:13.949 "data_size": 63488
00:10:13.949 },
00:10:13.949 {
00:10:13.949 "name": "BaseBdev3",
00:10:13.949 "uuid": "97d284b3-225b-4bee-b02e-10208b1cff67",
00:10:13.949 "is_configured": true,
00:10:13.949 "data_offset": 2048,
00:10:13.949 "data_size": 63488
00:10:13.949 }
00:10:13.949 ]
00:10:13.949 }
00:10:13.949 }
00:10:13.949 }'
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:13.949 BaseBdev2
00:10:13.949 BaseBdev3'
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.949 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.208 [2024-10-11 09:43:58.619530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:14.208 [2024-10-11 09:43:58.619619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:14.208 [2024-10-11 09:43:58.619723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:14.208 [2024-10-11 09:43:58.619822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:14.208 [2024-10-11 09:43:58.619839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64872
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64872 ']'
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64872
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:14.208 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64872
00:10:14.209 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:14.209 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:14.209 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64872'
00:10:14.209 killing process with pid 64872
00:10:14.209 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64872
00:10:14.209 [2024-10-11 09:43:58.661229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:14.209 09:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64872
00:10:14.467 [2024-10-11 09:43:58.991951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:15.843 09:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:10:15.843 
00:10:15.843 real 0m11.159s
00:10:15.843 user 0m17.790s
00:10:15.843 sys 0m1.877s
00:10:15.843 09:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:15.843 09:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:15.843 ************************************
00:10:15.843 END TEST raid_state_function_test_sb
00:10:15.843 ************************************
00:10:15.843 09:44:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:10:15.843 09:44:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:15.843 09:44:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:15.843 09:44:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:15.843 ************************************
00:10:15.843 START TEST raid_superblock_test
00:10:15.843 ************************************
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:10:15.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65498
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65498
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65498 ']'
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:15.843 09:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.843 [2024-10-11 09:44:00.352205] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:10:15.843 [2024-10-11 09:44:00.352345] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65498 ]
00:10:16.102 [2024-10-11 09:44:00.522076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:16.102 [2024-10-11 09:44:00.647362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:16.360 [2024-10-11 09:44:00.864911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:16.360 [2024-10-11 09:44:00.865034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.618 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 malloc1
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 [2024-10-11 09:44:01.264585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:16.877 [2024-10-11 09:44:01.264697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:16.877 [2024-10-11 09:44:01.264749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:16.877 [2024-10-11 09:44:01.264785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:16.877 [2024-10-11 09:44:01.266914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:16.877 [2024-10-11 09:44:01.266987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:16.877 pt1
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 malloc2
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 [2024-10-11 09:44:01.323150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:16.877 [2024-10-11 09:44:01.323256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:16.877 [2024-10-11 09:44:01.323315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:10:16.877 [2024-10-11 09:44:01.323351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:16.877 [2024-10-11 09:44:01.325691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:16.877 [2024-10-11 09:44:01.325778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:16.877 pt2
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 malloc3
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 [2024-10-11 09:44:01.398146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:16.877 [2024-10-11 09:44:01.398203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:16.877 [2024-10-11 09:44:01.398225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:10:16.877 [2024-10-11 09:44:01.398235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:16.877 [2024-10-11 09:44:01.400543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:16.877 [2024-10-11 09:44:01.400597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:16.877 pt3
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 [2024-10-11 09:44:01.410201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:16.877 [2024-10-11 09:44:01.412272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:16.877 [2024-10-11 09:44:01.412441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:16.877 [2024-10-11 09:44:01.412642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:16.877 [2024-10-11 09:44:01.412660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:16.877 [2024-10-11 09:44:01.412959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:16.877 [2024-10-11 09:44:01.413164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:16.877 [2024-10-11 09:44:01.413177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:10:16.877 [2024-10-11 09:44:01.413356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:16.877 "name": "raid_bdev1",
00:10:16.877 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033",
00:10:16.877 "strip_size_kb": 64,
00:10:16.877 "state": "online",
00:10:16.877 "raid_level": "raid0",
00:10:16.877 "superblock": true,
00:10:16.877 "num_base_bdevs": 3,
00:10:16.877 "num_base_bdevs_discovered": 3,
00:10:16.877 "num_base_bdevs_operational": 3,
00:10:16.877 "base_bdevs_list": [
00:10:16.877 {
00:10:16.877 "name": "pt1",
00:10:16.877 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:16.877 "is_configured": true,
00:10:16.877 "data_offset": 2048,
00:10:16.877 "data_size": 63488
00:10:16.877 },
00:10:16.877 {
00:10:16.877 "name": "pt2",
00:10:16.877 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:16.877 "is_configured": true,
00:10:16.877 "data_offset": 2048,
00:10:16.877 "data_size": 63488
00:10:16.877 },
00:10:16.877 {
00:10:16.877 "name": "pt3",
00:10:16.877 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:16.877 "is_configured": true,
00:10:16.877 "data_offset": 2048,
00:10:16.877 "data_size": 63488
00:10:16.877 }
00:10:16.877 ]
00:10:16.877 }'
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:16.877 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.444 [2024-10-11 09:44:01.897714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.444 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:17.444 "name": "raid_bdev1",
00:10:17.444 "aliases": [
00:10:17.444 "81d1f129-b2ea-4a95-ab1b-c10893443033"
00:10:17.444 ],
00:10:17.444 "product_name": "Raid Volume",
00:10:17.444 "block_size": 512,
00:10:17.444 "num_blocks": 190464,
00:10:17.444 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033",
00:10:17.444 "assigned_rate_limits": {
00:10:17.444 "rw_ios_per_sec": 0,
00:10:17.444 "rw_mbytes_per_sec": 0,
00:10:17.444 "r_mbytes_per_sec": 0,
00:10:17.444 "w_mbytes_per_sec": 0
00:10:17.444 },
00:10:17.444 "claimed": false,
00:10:17.444 "zoned": false,
00:10:17.444 "supported_io_types": {
00:10:17.444 "read": true,
00:10:17.444 "write": true,
00:10:17.444 "unmap": true,
00:10:17.444 "flush": true,
00:10:17.444 "reset": true,
00:10:17.444 "nvme_admin": false,
00:10:17.444 "nvme_io": false,
00:10:17.444 "nvme_io_md": false,
00:10:17.444 "write_zeroes": true,
00:10:17.444 "zcopy": false,
00:10:17.444 "get_zone_info": false,
00:10:17.444 "zone_management": false,
00:10:17.444 "zone_append": false,
00:10:17.444 "compare": false,
00:10:17.444 "compare_and_write": false,
00:10:17.444 "abort": false,
00:10:17.444 "seek_hole": false,
00:10:17.444 "seek_data": false,
00:10:17.444 "copy": false,
00:10:17.444 "nvme_iov_md": false
00:10:17.444 },
00:10:17.444 "memory_domains": [
00:10:17.444 {
00:10:17.444 "dma_device_id": "system",
00:10:17.444 "dma_device_type": 1
00:10:17.444 },
00:10:17.444 {
00:10:17.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.444 "dma_device_type": 2
00:10:17.444 },
00:10:17.444 {
00:10:17.444 "dma_device_id": "system",
00:10:17.444 "dma_device_type": 1
00:10:17.444 },
00:10:17.444 {
00:10:17.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.444 "dma_device_type": 2
00:10:17.444 },
00:10:17.444 {
00:10:17.444 "dma_device_id": "system",
00:10:17.444 "dma_device_type": 1
00:10:17.444 },
00:10:17.444 {
00:10:17.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.444 "dma_device_type": 2
00:10:17.444 }
00:10:17.444 ],
00:10:17.444 "driver_specific": {
00:10:17.444 "raid": {
00:10:17.444 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033",
00:10:17.444 "strip_size_kb": 64,
00:10:17.444 "state": "online",
00:10:17.444 "raid_level": "raid0",
00:10:17.444 "superblock": true,
00:10:17.444 "num_base_bdevs": 3,
00:10:17.444 "num_base_bdevs_discovered": 3,
00:10:17.444 "num_base_bdevs_operational": 3,
00:10:17.444 "base_bdevs_list": [
00:10:17.444 {
00:10:17.444 "name": "pt1",
00:10:17.444 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:17.445 "is_configured": true,
00:10:17.445 "data_offset": 2048,
00:10:17.445 "data_size": 63488
00:10:17.445 },
00:10:17.445 {
00:10:17.445 "name": "pt2",
00:10:17.445 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:17.445 "is_configured": true,
00:10:17.445 "data_offset": 2048,
00:10:17.445 "data_size": 63488
00:10:17.445 },
00:10:17.445 {
00:10:17.445 "name": "pt3",
00:10:17.445 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:17.445 "is_configured": true,
00:10:17.445 "data_offset": 2048,
00:10:17.445 "data_size": 63488
00:10:17.445 }
00:10:17.445 ]
00:10:17.445 }
00:10:17.445 }
00:10:17.445 }'
00:10:17.445 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:17.445 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:17.445 pt2
00:10:17.445 pt3'
00:10:17.445 09:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.445 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.703 [2024-10-11 09:44:02.181177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=81d1f129-b2ea-4a95-ab1b-c10893443033
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 81d1f129-b2ea-4a95-ab1b-c10893443033 ']'
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.703 [2024-10-11 09:44:02.224805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:17.703 [2024-10-11 09:44:02.224893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:17.703 [2024-10-11 09:44:02.225035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:17.703 [2024-10-11 09:44:02.225154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:17.703 [2024-10-11 09:44:02.225208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.703 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.704 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.962 [2024-10-11 09:44:02.348644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:17.962 [2024-10-11 09:44:02.350775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:17.962 [2024-10-11 09:44:02.350856]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:17.962 [2024-10-11 09:44:02.350911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:17.962 [2024-10-11 09:44:02.350964] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:17.962 [2024-10-11 09:44:02.350986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:17.962 [2024-10-11 09:44:02.351004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.962 [2024-10-11 09:44:02.351018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:17.962 request: 00:10:17.962 { 00:10:17.962 "name": "raid_bdev1", 00:10:17.962 "raid_level": "raid0", 00:10:17.962 "base_bdevs": [ 00:10:17.962 "malloc1", 00:10:17.962 "malloc2", 00:10:17.962 "malloc3" 00:10:17.962 ], 00:10:17.962 "strip_size_kb": 64, 00:10:17.962 "superblock": false, 00:10:17.962 "method": "bdev_raid_create", 00:10:17.962 "req_id": 1 00:10:17.962 } 00:10:17.962 Got JSON-RPC error response 00:10:17.962 response: 00:10:17.962 { 00:10:17.962 "code": -17, 00:10:17.962 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:17.962 } 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:17.962 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.963 [2024-10-11 09:44:02.408486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.963 [2024-10-11 09:44:02.408562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.963 [2024-10-11 09:44:02.408586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:17.963 [2024-10-11 09:44:02.408597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.963 [2024-10-11 09:44:02.411044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.963 [2024-10-11 09:44:02.411088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.963 [2024-10-11 09:44:02.411187] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:17.963 [2024-10-11 09:44:02.411247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:17.963 pt1 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.963 "name": "raid_bdev1", 00:10:17.963 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033", 00:10:17.963 
"strip_size_kb": 64, 00:10:17.963 "state": "configuring", 00:10:17.963 "raid_level": "raid0", 00:10:17.963 "superblock": true, 00:10:17.963 "num_base_bdevs": 3, 00:10:17.963 "num_base_bdevs_discovered": 1, 00:10:17.963 "num_base_bdevs_operational": 3, 00:10:17.963 "base_bdevs_list": [ 00:10:17.963 { 00:10:17.963 "name": "pt1", 00:10:17.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.963 "is_configured": true, 00:10:17.963 "data_offset": 2048, 00:10:17.963 "data_size": 63488 00:10:17.963 }, 00:10:17.963 { 00:10:17.963 "name": null, 00:10:17.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.963 "is_configured": false, 00:10:17.963 "data_offset": 2048, 00:10:17.963 "data_size": 63488 00:10:17.963 }, 00:10:17.963 { 00:10:17.963 "name": null, 00:10:17.963 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.963 "is_configured": false, 00:10:17.963 "data_offset": 2048, 00:10:17.963 "data_size": 63488 00:10:17.963 } 00:10:17.963 ] 00:10:17.963 }' 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.963 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 [2024-10-11 09:44:02.867917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.532 [2024-10-11 09:44:02.867986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.532 [2024-10-11 09:44:02.868011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:18.532 [2024-10-11 09:44:02.868021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.532 [2024-10-11 09:44:02.868533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.532 [2024-10-11 09:44:02.868568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.532 [2024-10-11 09:44:02.868668] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.532 [2024-10-11 09:44:02.868710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.532 pt2 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 [2024-10-11 09:44:02.875980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.532 09:44:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.532 "name": "raid_bdev1", 00:10:18.532 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033", 00:10:18.532 "strip_size_kb": 64, 00:10:18.532 "state": "configuring", 00:10:18.532 "raid_level": "raid0", 00:10:18.532 "superblock": true, 00:10:18.532 "num_base_bdevs": 3, 00:10:18.532 "num_base_bdevs_discovered": 1, 00:10:18.532 "num_base_bdevs_operational": 3, 00:10:18.532 "base_bdevs_list": [ 00:10:18.532 { 00:10:18.532 "name": "pt1", 00:10:18.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.532 "is_configured": true, 00:10:18.532 "data_offset": 2048, 00:10:18.532 "data_size": 63488 00:10:18.532 }, 00:10:18.532 { 00:10:18.532 "name": null, 00:10:18.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.532 "is_configured": false, 00:10:18.532 "data_offset": 0, 00:10:18.532 "data_size": 63488 00:10:18.532 }, 00:10:18.532 { 00:10:18.532 "name": null, 00:10:18.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.532 
"is_configured": false, 00:10:18.532 "data_offset": 2048, 00:10:18.532 "data_size": 63488 00:10:18.532 } 00:10:18.532 ] 00:10:18.532 }' 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.532 09:44:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.793 [2024-10-11 09:44:03.327194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.793 [2024-10-11 09:44:03.327274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.793 [2024-10-11 09:44:03.327317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:18.793 [2024-10-11 09:44:03.327332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.793 [2024-10-11 09:44:03.327890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.793 [2024-10-11 09:44:03.327927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.793 [2024-10-11 09:44:03.328037] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.793 [2024-10-11 09:44:03.328085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.793 pt2 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.793 [2024-10-11 09:44:03.335176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.793 [2024-10-11 09:44:03.335247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.793 [2024-10-11 09:44:03.335264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:18.793 [2024-10-11 09:44:03.335277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.793 [2024-10-11 09:44:03.335768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.793 [2024-10-11 09:44:03.335803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.793 [2024-10-11 09:44:03.335883] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:18.793 [2024-10-11 09:44:03.335916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.793 [2024-10-11 09:44:03.336062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.793 [2024-10-11 09:44:03.336086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:18.793 [2024-10-11 09:44:03.336382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:18.793 [2024-10-11 09:44:03.336568] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:18.793 [2024-10-11 09:44:03.336585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:18.793 [2024-10-11 09:44:03.336797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.793 pt3 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.793 "name": "raid_bdev1", 00:10:18.793 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033", 00:10:18.793 "strip_size_kb": 64, 00:10:18.793 "state": "online", 00:10:18.793 "raid_level": "raid0", 00:10:18.793 "superblock": true, 00:10:18.793 "num_base_bdevs": 3, 00:10:18.793 "num_base_bdevs_discovered": 3, 00:10:18.793 "num_base_bdevs_operational": 3, 00:10:18.793 "base_bdevs_list": [ 00:10:18.793 { 00:10:18.793 "name": "pt1", 00:10:18.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.793 "is_configured": true, 00:10:18.793 "data_offset": 2048, 00:10:18.793 "data_size": 63488 00:10:18.793 }, 00:10:18.793 { 00:10:18.793 "name": "pt2", 00:10:18.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.793 "is_configured": true, 00:10:18.793 "data_offset": 2048, 00:10:18.793 "data_size": 63488 00:10:18.793 }, 00:10:18.793 { 00:10:18.793 "name": "pt3", 00:10:18.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.793 "is_configured": true, 00:10:18.793 "data_offset": 2048, 00:10:18.793 "data_size": 63488 00:10:18.793 } 00:10:18.793 ] 00:10:18.793 }' 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.793 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:19.362 09:44:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.362 [2024-10-11 09:44:03.810848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.362 "name": "raid_bdev1", 00:10:19.362 "aliases": [ 00:10:19.362 "81d1f129-b2ea-4a95-ab1b-c10893443033" 00:10:19.362 ], 00:10:19.362 "product_name": "Raid Volume", 00:10:19.362 "block_size": 512, 00:10:19.362 "num_blocks": 190464, 00:10:19.362 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033", 00:10:19.362 "assigned_rate_limits": { 00:10:19.362 "rw_ios_per_sec": 0, 00:10:19.362 "rw_mbytes_per_sec": 0, 00:10:19.362 "r_mbytes_per_sec": 0, 00:10:19.362 "w_mbytes_per_sec": 0 00:10:19.362 }, 00:10:19.362 "claimed": false, 00:10:19.362 "zoned": false, 00:10:19.362 "supported_io_types": { 00:10:19.362 "read": true, 00:10:19.362 "write": true, 00:10:19.362 "unmap": true, 00:10:19.362 "flush": true, 00:10:19.362 "reset": true, 00:10:19.362 "nvme_admin": false, 00:10:19.362 "nvme_io": false, 00:10:19.362 "nvme_io_md": false, 00:10:19.362 
"write_zeroes": true, 00:10:19.362 "zcopy": false, 00:10:19.362 "get_zone_info": false, 00:10:19.362 "zone_management": false, 00:10:19.362 "zone_append": false, 00:10:19.362 "compare": false, 00:10:19.362 "compare_and_write": false, 00:10:19.362 "abort": false, 00:10:19.362 "seek_hole": false, 00:10:19.362 "seek_data": false, 00:10:19.362 "copy": false, 00:10:19.362 "nvme_iov_md": false 00:10:19.362 }, 00:10:19.362 "memory_domains": [ 00:10:19.362 { 00:10:19.362 "dma_device_id": "system", 00:10:19.362 "dma_device_type": 1 00:10:19.362 }, 00:10:19.362 { 00:10:19.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.362 "dma_device_type": 2 00:10:19.362 }, 00:10:19.362 { 00:10:19.362 "dma_device_id": "system", 00:10:19.362 "dma_device_type": 1 00:10:19.362 }, 00:10:19.362 { 00:10:19.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.362 "dma_device_type": 2 00:10:19.362 }, 00:10:19.362 { 00:10:19.362 "dma_device_id": "system", 00:10:19.362 "dma_device_type": 1 00:10:19.362 }, 00:10:19.362 { 00:10:19.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.362 "dma_device_type": 2 00:10:19.362 } 00:10:19.362 ], 00:10:19.362 "driver_specific": { 00:10:19.362 "raid": { 00:10:19.362 "uuid": "81d1f129-b2ea-4a95-ab1b-c10893443033", 00:10:19.362 "strip_size_kb": 64, 00:10:19.362 "state": "online", 00:10:19.362 "raid_level": "raid0", 00:10:19.362 "superblock": true, 00:10:19.362 "num_base_bdevs": 3, 00:10:19.362 "num_base_bdevs_discovered": 3, 00:10:19.362 "num_base_bdevs_operational": 3, 00:10:19.362 "base_bdevs_list": [ 00:10:19.362 { 00:10:19.362 "name": "pt1", 00:10:19.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.362 "is_configured": true, 00:10:19.362 "data_offset": 2048, 00:10:19.362 "data_size": 63488 00:10:19.362 }, 00:10:19.362 { 00:10:19.362 "name": "pt2", 00:10:19.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.362 "is_configured": true, 00:10:19.362 "data_offset": 2048, 00:10:19.362 "data_size": 63488 00:10:19.362 }, 00:10:19.362 
{ 00:10:19.362 "name": "pt3", 00:10:19.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.362 "is_configured": true, 00:10:19.362 "data_offset": 2048, 00:10:19.362 "data_size": 63488 00:10:19.362 } 00:10:19.362 ] 00:10:19.362 } 00:10:19.362 } 00:10:19.362 }' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:19.362 pt2 00:10:19.362 pt3' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.362 09:44:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:19.622 [2024-10-11 
09:44:04.086371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 81d1f129-b2ea-4a95-ab1b-c10893443033 '!=' 81d1f129-b2ea-4a95-ab1b-c10893443033 ']' 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65498 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65498 ']' 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65498 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65498 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.622 killing process with pid 65498 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65498' 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65498 00:10:19.622 [2024-10-11 09:44:04.155541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.622 09:44:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 65498 00:10:19.622 [2024-10-11 09:44:04.155731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.622 [2024-10-11 09:44:04.155851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.622 [2024-10-11 09:44:04.155875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:19.882 [2024-10-11 09:44:04.500186] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.277 09:44:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:21.277 00:10:21.277 real 0m5.550s 00:10:21.277 user 0m7.886s 00:10:21.277 sys 0m0.883s 00:10:21.277 09:44:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.277 09:44:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.277 ************************************ 00:10:21.277 END TEST raid_superblock_test 00:10:21.277 ************************************ 00:10:21.277 09:44:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:21.277 09:44:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:21.277 09:44:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.277 09:44:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.277 ************************************ 00:10:21.277 START TEST raid_read_error_test 00:10:21.277 ************************************ 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:21.277 09:44:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d0y93NlTlq 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65751 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65751 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65751 ']' 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.277 09:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.536 [2024-10-11 09:44:06.002001] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:21.536 [2024-10-11 09:44:06.002148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65751 ] 00:10:21.795 [2024-10-11 09:44:06.177511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.795 [2024-10-11 09:44:06.332907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.054 [2024-10-11 09:44:06.600641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.054 [2024-10-11 09:44:06.600766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.314 BaseBdev1_malloc 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.314 true 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.314 [2024-10-11 09:44:06.939694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:22.314 [2024-10-11 09:44:06.939802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.314 [2024-10-11 09:44:06.939832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:22.314 [2024-10-11 09:44:06.939849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.314 [2024-10-11 09:44:06.942455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.314 [2024-10-11 09:44:06.942510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:22.314 BaseBdev1 00:10:22.314 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.574 09:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:22.574 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 09:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 BaseBdev2_malloc 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 true 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 [2024-10-11 09:44:07.016479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.574 [2024-10-11 09:44:07.016570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.574 [2024-10-11 09:44:07.016599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.574 [2024-10-11 09:44:07.016615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.574 [2024-10-11 09:44:07.019224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.574 [2024-10-11 09:44:07.019302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.574 BaseBdev2 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 BaseBdev3_malloc 00:10:22.574 09:44:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 true 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 [2024-10-11 09:44:07.106100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:22.574 [2024-10-11 09:44:07.106198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.575 [2024-10-11 09:44:07.106229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:22.575 [2024-10-11 09:44:07.106245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.575 [2024-10-11 09:44:07.109055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.575 [2024-10-11 09:44:07.109107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:22.575 BaseBdev3 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.575 [2024-10-11 09:44:07.118156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.575 [2024-10-11 09:44:07.120482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.575 [2024-10-11 09:44:07.120594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.575 [2024-10-11 09:44:07.120868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:22.575 [2024-10-11 09:44:07.120892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:22.575 [2024-10-11 09:44:07.121234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:22.575 [2024-10-11 09:44:07.121449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:22.575 [2024-10-11 09:44:07.121476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:22.575 [2024-10-11 09:44:07.121696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.575 09:44:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.575 "name": "raid_bdev1", 00:10:22.575 "uuid": "01d95412-0bb9-49ef-9b39-324673d74817", 00:10:22.575 "strip_size_kb": 64, 00:10:22.575 "state": "online", 00:10:22.575 "raid_level": "raid0", 00:10:22.575 "superblock": true, 00:10:22.575 "num_base_bdevs": 3, 00:10:22.575 "num_base_bdevs_discovered": 3, 00:10:22.575 "num_base_bdevs_operational": 3, 00:10:22.575 "base_bdevs_list": [ 00:10:22.575 { 00:10:22.575 "name": "BaseBdev1", 00:10:22.575 "uuid": "20dda667-d716-53eb-ad6e-abd150e931de", 00:10:22.575 "is_configured": true, 00:10:22.575 "data_offset": 2048, 00:10:22.575 "data_size": 63488 00:10:22.575 }, 00:10:22.575 { 00:10:22.575 "name": "BaseBdev2", 00:10:22.575 "uuid": "de1d1a51-e5a8-5925-b932-c48d0d38b9f2", 00:10:22.575 "is_configured": true, 00:10:22.575 "data_offset": 2048, 00:10:22.575 "data_size": 63488 
00:10:22.575 }, 00:10:22.575 { 00:10:22.575 "name": "BaseBdev3", 00:10:22.575 "uuid": "2ad89f7a-e9cf-5cd5-900d-a059800b8037", 00:10:22.575 "is_configured": true, 00:10:22.575 "data_offset": 2048, 00:10:22.575 "data_size": 63488 00:10:22.575 } 00:10:22.575 ] 00:10:22.575 }' 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.575 09:44:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.144 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:23.144 09:44:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:23.144 [2024-10-11 09:44:07.714953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.083 "name": "raid_bdev1", 00:10:24.083 "uuid": "01d95412-0bb9-49ef-9b39-324673d74817", 00:10:24.083 "strip_size_kb": 64, 00:10:24.083 "state": "online", 00:10:24.083 "raid_level": "raid0", 00:10:24.083 "superblock": true, 00:10:24.083 "num_base_bdevs": 3, 00:10:24.083 "num_base_bdevs_discovered": 3, 00:10:24.083 "num_base_bdevs_operational": 3, 00:10:24.083 "base_bdevs_list": [ 00:10:24.083 { 00:10:24.083 "name": "BaseBdev1", 00:10:24.083 "uuid": "20dda667-d716-53eb-ad6e-abd150e931de", 00:10:24.083 "is_configured": true, 00:10:24.083 "data_offset": 2048, 00:10:24.083 "data_size": 63488 
00:10:24.083 }, 00:10:24.083 { 00:10:24.083 "name": "BaseBdev2", 00:10:24.083 "uuid": "de1d1a51-e5a8-5925-b932-c48d0d38b9f2", 00:10:24.083 "is_configured": true, 00:10:24.083 "data_offset": 2048, 00:10:24.083 "data_size": 63488 00:10:24.083 }, 00:10:24.083 { 00:10:24.083 "name": "BaseBdev3", 00:10:24.083 "uuid": "2ad89f7a-e9cf-5cd5-900d-a059800b8037", 00:10:24.083 "is_configured": true, 00:10:24.083 "data_offset": 2048, 00:10:24.083 "data_size": 63488 00:10:24.083 } 00:10:24.083 ] 00:10:24.083 }' 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.083 09:44:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.652 09:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.652 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.652 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.652 [2024-10-11 09:44:09.097821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.652 [2024-10-11 09:44:09.097879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.652 [2024-10-11 09:44:09.101165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.652 [2024-10-11 09:44:09.101238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.652 [2024-10-11 09:44:09.101294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.652 [2024-10-11 09:44:09.101308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:24.652 { 00:10:24.652 "results": [ 00:10:24.652 { 00:10:24.652 "job": "raid_bdev1", 00:10:24.652 "core_mask": "0x1", 00:10:24.652 "workload": "randrw", 00:10:24.652 "percentage": 50, 
00:10:24.652 "status": "finished", 00:10:24.652 "queue_depth": 1, 00:10:24.652 "io_size": 131072, 00:10:24.652 "runtime": 1.383121, 00:10:24.652 "iops": 11899.898851944263, 00:10:24.652 "mibps": 1487.4873564930328, 00:10:24.652 "io_failed": 1, 00:10:24.652 "io_timeout": 0, 00:10:24.652 "avg_latency_us": 118.31743063772439, 00:10:24.652 "min_latency_us": 28.618340611353712, 00:10:24.652 "max_latency_us": 1738.564192139738 00:10:24.652 } 00:10:24.652 ], 00:10:24.652 "core_count": 1 00:10:24.652 } 00:10:24.652 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.652 09:44:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65751 00:10:24.652 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65751 ']' 00:10:24.652 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65751 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65751 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.653 killing process with pid 65751 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65751' 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65751 00:10:24.653 [2024-10-11 09:44:09.131815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.653 09:44:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65751 00:10:24.912 [2024-10-11 
09:44:09.400584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d0y93NlTlq 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:26.292 00:10:26.292 real 0m4.845s 00:10:26.292 user 0m5.639s 00:10:26.292 sys 0m0.727s 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.292 09:44:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.292 ************************************ 00:10:26.292 END TEST raid_read_error_test 00:10:26.292 ************************************ 00:10:26.292 09:44:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:26.292 09:44:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:26.292 09:44:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.292 09:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.292 ************************************ 00:10:26.292 START TEST raid_write_error_test 00:10:26.292 ************************************ 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:10:26.292 09:44:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:26.292 09:44:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qprYi43xoS 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65902 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65902 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65902 ']' 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.292 09:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.292 [2024-10-11 09:44:10.906528] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:10:26.292 [2024-10-11 09:44:10.906664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65902 ] 00:10:26.551 [2024-10-11 09:44:11.077817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.810 [2024-10-11 09:44:11.214796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.069 [2024-10-11 09:44:11.453686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.069 [2024-10-11 09:44:11.453754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 BaseBdev1_malloc 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 true 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 [2024-10-11 09:44:11.828895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:27.330 [2024-10-11 09:44:11.828958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.330 [2024-10-11 09:44:11.828982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:27.330 [2024-10-11 09:44:11.828994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.330 [2024-10-11 09:44:11.831206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.330 [2024-10-11 09:44:11.831247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:27.330 BaseBdev1 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.330 BaseBdev2_malloc 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 true 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.330 [2024-10-11 09:44:11.898067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:27.330 [2024-10-11 09:44:11.898146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.330 [2024-10-11 09:44:11.898167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.330 [2024-10-11 09:44:11.898180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.330 [2024-10-11 09:44:11.900657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.330 [2024-10-11 09:44:11.900720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:27.330 BaseBdev2 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.330 09:44:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.330 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.597 BaseBdev3_malloc 00:10:27.597 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.597 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:27.597 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.597 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.597 true 00:10:27.597 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.597 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.598 [2024-10-11 09:44:11.981698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:27.598 [2024-10-11 09:44:11.981789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.598 [2024-10-11 09:44:11.981815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:27.598 [2024-10-11 09:44:11.981827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.598 [2024-10-11 09:44:11.984301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.598 [2024-10-11 09:44:11.984345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:27.598 BaseBdev3 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.598 [2024-10-11 09:44:11.989724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.598 [2024-10-11 09:44:11.991624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.598 [2024-10-11 09:44:11.991728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.598 [2024-10-11 09:44:11.991952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:27.598 [2024-10-11 09:44:11.991973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:27.598 [2024-10-11 09:44:11.992282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:27.598 [2024-10-11 09:44:11.992467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:27.598 [2024-10-11 09:44:11.992491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:27.598 [2024-10-11 09:44:11.992671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.598 09:44:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.598 09:44:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.598 09:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.598 "name": "raid_bdev1", 00:10:27.598 "uuid": "b03601cb-df4f-4e5f-b16f-2a4306e7c1f9", 00:10:27.598 "strip_size_kb": 64, 00:10:27.598 "state": "online", 00:10:27.598 "raid_level": "raid0", 00:10:27.598 "superblock": true, 00:10:27.598 "num_base_bdevs": 3, 00:10:27.598 "num_base_bdevs_discovered": 3, 00:10:27.598 "num_base_bdevs_operational": 3, 00:10:27.598 "base_bdevs_list": [ 00:10:27.598 { 00:10:27.598 "name": "BaseBdev1", 
00:10:27.598 "uuid": "20f173aa-4ad3-5e71-a663-c04cc49b4617", 00:10:27.598 "is_configured": true, 00:10:27.598 "data_offset": 2048, 00:10:27.598 "data_size": 63488 00:10:27.598 }, 00:10:27.598 { 00:10:27.598 "name": "BaseBdev2", 00:10:27.598 "uuid": "ce5861cb-ba33-50ae-b36d-5b96d50e7fcd", 00:10:27.598 "is_configured": true, 00:10:27.598 "data_offset": 2048, 00:10:27.598 "data_size": 63488 00:10:27.598 }, 00:10:27.598 { 00:10:27.598 "name": "BaseBdev3", 00:10:27.598 "uuid": "dca1216e-ff1c-55fd-bd97-dd13fade583d", 00:10:27.598 "is_configured": true, 00:10:27.598 "data_offset": 2048, 00:10:27.598 "data_size": 63488 00:10:27.598 } 00:10:27.598 ] 00:10:27.598 }' 00:10:27.598 09:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.598 09:44:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.856 09:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.856 09:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.114 [2024-10-11 09:44:12.582270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.050 "name": "raid_bdev1", 00:10:29.050 "uuid": "b03601cb-df4f-4e5f-b16f-2a4306e7c1f9", 00:10:29.050 "strip_size_kb": 64, 00:10:29.050 "state": "online", 00:10:29.050 
"raid_level": "raid0", 00:10:29.050 "superblock": true, 00:10:29.050 "num_base_bdevs": 3, 00:10:29.050 "num_base_bdevs_discovered": 3, 00:10:29.050 "num_base_bdevs_operational": 3, 00:10:29.050 "base_bdevs_list": [ 00:10:29.050 { 00:10:29.050 "name": "BaseBdev1", 00:10:29.050 "uuid": "20f173aa-4ad3-5e71-a663-c04cc49b4617", 00:10:29.050 "is_configured": true, 00:10:29.050 "data_offset": 2048, 00:10:29.050 "data_size": 63488 00:10:29.050 }, 00:10:29.050 { 00:10:29.050 "name": "BaseBdev2", 00:10:29.050 "uuid": "ce5861cb-ba33-50ae-b36d-5b96d50e7fcd", 00:10:29.050 "is_configured": true, 00:10:29.050 "data_offset": 2048, 00:10:29.050 "data_size": 63488 00:10:29.050 }, 00:10:29.050 { 00:10:29.050 "name": "BaseBdev3", 00:10:29.050 "uuid": "dca1216e-ff1c-55fd-bd97-dd13fade583d", 00:10:29.050 "is_configured": true, 00:10:29.050 "data_offset": 2048, 00:10:29.050 "data_size": 63488 00:10:29.050 } 00:10:29.050 ] 00:10:29.050 }' 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.050 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.309 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.309 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.309 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.309 [2024-10-11 09:44:13.910710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.309 [2024-10-11 09:44:13.910758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.310 [2024-10-11 09:44:13.913948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.310 [2024-10-11 09:44:13.914003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.310 [2024-10-11 09:44:13.914047] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.310 [2024-10-11 09:44:13.914058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:29.310 { 00:10:29.310 "results": [ 00:10:29.310 { 00:10:29.310 "job": "raid_bdev1", 00:10:29.310 "core_mask": "0x1", 00:10:29.310 "workload": "randrw", 00:10:29.310 "percentage": 50, 00:10:29.310 "status": "finished", 00:10:29.310 "queue_depth": 1, 00:10:29.310 "io_size": 131072, 00:10:29.310 "runtime": 1.328966, 00:10:29.310 "iops": 14090.653936970548, 00:10:29.310 "mibps": 1761.3317421213185, 00:10:29.310 "io_failed": 1, 00:10:29.310 "io_timeout": 0, 00:10:29.310 "avg_latency_us": 98.66292542141359, 00:10:29.310 "min_latency_us": 26.717903930131005, 00:10:29.310 "max_latency_us": 1674.172925764192 00:10:29.310 } 00:10:29.310 ], 00:10:29.310 "core_count": 1 00:10:29.310 } 00:10:29.310 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.310 09:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65902 00:10:29.310 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65902 ']' 00:10:29.310 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65902 00:10:29.310 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:29.310 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.310 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65902 00:10:29.569 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.569 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.569 killing process with pid 65902 00:10:29.569 09:44:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65902' 00:10:29.569 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65902 00:10:29.569 [2024-10-11 09:44:13.950195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.569 09:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65902 00:10:29.569 [2024-10-11 09:44:14.198781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qprYi43xoS 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:30.980 00:10:30.980 real 0m4.700s 00:10:30.980 user 0m5.569s 00:10:30.980 sys 0m0.588s 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.980 09:44:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.980 ************************************ 00:10:30.980 END TEST raid_write_error_test 00:10:30.980 ************************************ 00:10:30.980 09:44:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:30.980 09:44:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:30.980 09:44:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:30.980 09:44:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.980 09:44:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.980 ************************************ 00:10:30.980 START TEST raid_state_function_test 00:10:30.980 ************************************ 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.980 09:44:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66046 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66046' 00:10:30.980 Process raid pid: 66046 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66046 00:10:30.980 09:44:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 66046 ']' 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.980 09:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.239 [2024-10-11 09:44:15.658911] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:10:31.239 [2024-10-11 09:44:15.659049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.239 [2024-10-11 09:44:15.827300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.498 [2024-10-11 09:44:15.969046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.757 [2024-10-11 09:44:16.241801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.757 [2024-10-11 09:44:16.241854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.016 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.017 [2024-10-11 09:44:16.585972] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.017 [2024-10-11 09:44:16.586039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.017 [2024-10-11 09:44:16.586052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.017 [2024-10-11 09:44:16.586063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.017 [2024-10-11 09:44:16.586072] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.017 [2024-10-11 09:44:16.586083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.017 "name": "Existed_Raid", 00:10:32.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.017 "strip_size_kb": 64, 00:10:32.017 "state": "configuring", 00:10:32.017 "raid_level": "concat", 00:10:32.017 "superblock": false, 00:10:32.017 "num_base_bdevs": 3, 00:10:32.017 "num_base_bdevs_discovered": 0, 00:10:32.017 "num_base_bdevs_operational": 3, 00:10:32.017 "base_bdevs_list": [ 00:10:32.017 { 00:10:32.017 "name": "BaseBdev1", 00:10:32.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.017 "is_configured": false, 00:10:32.017 "data_offset": 0, 00:10:32.017 "data_size": 0 00:10:32.017 }, 00:10:32.017 { 00:10:32.017 "name": "BaseBdev2", 00:10:32.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.017 "is_configured": false, 00:10:32.017 "data_offset": 0, 00:10:32.017 "data_size": 0 00:10:32.017 }, 00:10:32.017 { 00:10:32.017 "name": "BaseBdev3", 00:10:32.017 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:32.017 "is_configured": false, 00:10:32.017 "data_offset": 0, 00:10:32.017 "data_size": 0 00:10:32.017 } 00:10:32.017 ] 00:10:32.017 }' 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.017 09:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.584 [2024-10-11 09:44:17.045157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.584 [2024-10-11 09:44:17.045201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.584 [2024-10-11 09:44:17.057154] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.584 [2024-10-11 09:44:17.057233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.584 [2024-10-11 09:44:17.057244] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.584 [2024-10-11 09:44:17.057255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:32.584 [2024-10-11 09:44:17.057262] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.584 [2024-10-11 09:44:17.057273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.584 [2024-10-11 09:44:17.112522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.584 BaseBdev1 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.584 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.585 [ 00:10:32.585 { 00:10:32.585 "name": "BaseBdev1", 00:10:32.585 "aliases": [ 00:10:32.585 "55c0579d-44f3-45aa-b1e0-bb01805b5db5" 00:10:32.585 ], 00:10:32.585 "product_name": "Malloc disk", 00:10:32.585 "block_size": 512, 00:10:32.585 "num_blocks": 65536, 00:10:32.585 "uuid": "55c0579d-44f3-45aa-b1e0-bb01805b5db5", 00:10:32.585 "assigned_rate_limits": { 00:10:32.585 "rw_ios_per_sec": 0, 00:10:32.585 "rw_mbytes_per_sec": 0, 00:10:32.585 "r_mbytes_per_sec": 0, 00:10:32.585 "w_mbytes_per_sec": 0 00:10:32.585 }, 00:10:32.585 "claimed": true, 00:10:32.585 "claim_type": "exclusive_write", 00:10:32.585 "zoned": false, 00:10:32.585 "supported_io_types": { 00:10:32.585 "read": true, 00:10:32.585 "write": true, 00:10:32.585 "unmap": true, 00:10:32.585 "flush": true, 00:10:32.585 "reset": true, 00:10:32.585 "nvme_admin": false, 00:10:32.585 "nvme_io": false, 00:10:32.585 "nvme_io_md": false, 00:10:32.585 "write_zeroes": true, 00:10:32.585 "zcopy": true, 00:10:32.585 "get_zone_info": false, 00:10:32.585 "zone_management": false, 00:10:32.585 "zone_append": false, 00:10:32.585 "compare": false, 00:10:32.585 "compare_and_write": false, 00:10:32.585 "abort": true, 00:10:32.585 "seek_hole": false, 00:10:32.585 "seek_data": false, 00:10:32.585 "copy": true, 00:10:32.585 "nvme_iov_md": false 00:10:32.585 }, 00:10:32.585 "memory_domains": [ 00:10:32.585 { 00:10:32.585 "dma_device_id": "system", 00:10:32.585 "dma_device_type": 1 00:10:32.585 }, 00:10:32.585 { 00:10:32.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:32.585 "dma_device_type": 2 00:10:32.585 } 00:10:32.585 ], 00:10:32.585 "driver_specific": {} 00:10:32.585 } 00:10:32.585 ] 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.585 09:44:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.585 "name": "Existed_Raid", 00:10:32.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.585 "strip_size_kb": 64, 00:10:32.585 "state": "configuring", 00:10:32.585 "raid_level": "concat", 00:10:32.585 "superblock": false, 00:10:32.585 "num_base_bdevs": 3, 00:10:32.585 "num_base_bdevs_discovered": 1, 00:10:32.585 "num_base_bdevs_operational": 3, 00:10:32.585 "base_bdevs_list": [ 00:10:32.585 { 00:10:32.585 "name": "BaseBdev1", 00:10:32.585 "uuid": "55c0579d-44f3-45aa-b1e0-bb01805b5db5", 00:10:32.585 "is_configured": true, 00:10:32.585 "data_offset": 0, 00:10:32.585 "data_size": 65536 00:10:32.585 }, 00:10:32.585 { 00:10:32.585 "name": "BaseBdev2", 00:10:32.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.585 "is_configured": false, 00:10:32.585 "data_offset": 0, 00:10:32.585 "data_size": 0 00:10:32.585 }, 00:10:32.585 { 00:10:32.585 "name": "BaseBdev3", 00:10:32.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.585 "is_configured": false, 00:10:32.585 "data_offset": 0, 00:10:32.585 "data_size": 0 00:10:32.585 } 00:10:32.585 ] 00:10:32.585 }' 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.585 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.152 [2024-10-11 09:44:17.603892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.152 [2024-10-11 09:44:17.603960] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.152 [2024-10-11 09:44:17.615951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.152 [2024-10-11 09:44:17.618096] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.152 [2024-10-11 09:44:17.618145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.152 [2024-10-11 09:44:17.618157] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.152 [2024-10-11 09:44:17.618168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.152 09:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.152 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.153 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.153 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.153 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.153 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.153 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.153 "name": "Existed_Raid", 00:10:33.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.153 "strip_size_kb": 64, 00:10:33.153 "state": "configuring", 00:10:33.153 "raid_level": "concat", 00:10:33.153 "superblock": false, 00:10:33.153 "num_base_bdevs": 3, 00:10:33.153 "num_base_bdevs_discovered": 1, 00:10:33.153 "num_base_bdevs_operational": 3, 00:10:33.153 "base_bdevs_list": [ 00:10:33.153 { 00:10:33.153 "name": "BaseBdev1", 00:10:33.153 "uuid": "55c0579d-44f3-45aa-b1e0-bb01805b5db5", 00:10:33.153 "is_configured": true, 00:10:33.153 "data_offset": 
0, 00:10:33.153 "data_size": 65536 00:10:33.153 }, 00:10:33.153 { 00:10:33.153 "name": "BaseBdev2", 00:10:33.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.153 "is_configured": false, 00:10:33.153 "data_offset": 0, 00:10:33.153 "data_size": 0 00:10:33.153 }, 00:10:33.153 { 00:10:33.153 "name": "BaseBdev3", 00:10:33.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.153 "is_configured": false, 00:10:33.153 "data_offset": 0, 00:10:33.153 "data_size": 0 00:10:33.153 } 00:10:33.153 ] 00:10:33.153 }' 00:10:33.153 09:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.153 09:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.720 [2024-10-11 09:44:18.101225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.720 BaseBdev2 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.720 [ 00:10:33.720 { 00:10:33.720 "name": "BaseBdev2", 00:10:33.720 "aliases": [ 00:10:33.720 "974bc579-2299-4382-8579-d4009be39032" 00:10:33.720 ], 00:10:33.720 "product_name": "Malloc disk", 00:10:33.720 "block_size": 512, 00:10:33.720 "num_blocks": 65536, 00:10:33.720 "uuid": "974bc579-2299-4382-8579-d4009be39032", 00:10:33.720 "assigned_rate_limits": { 00:10:33.720 "rw_ios_per_sec": 0, 00:10:33.720 "rw_mbytes_per_sec": 0, 00:10:33.720 "r_mbytes_per_sec": 0, 00:10:33.720 "w_mbytes_per_sec": 0 00:10:33.720 }, 00:10:33.720 "claimed": true, 00:10:33.720 "claim_type": "exclusive_write", 00:10:33.720 "zoned": false, 00:10:33.720 "supported_io_types": { 00:10:33.720 "read": true, 00:10:33.720 "write": true, 00:10:33.720 "unmap": true, 00:10:33.720 "flush": true, 00:10:33.720 "reset": true, 00:10:33.720 "nvme_admin": false, 00:10:33.720 "nvme_io": false, 00:10:33.720 "nvme_io_md": false, 00:10:33.720 "write_zeroes": true, 00:10:33.720 "zcopy": true, 00:10:33.720 "get_zone_info": false, 00:10:33.720 "zone_management": false, 00:10:33.720 "zone_append": false, 00:10:33.720 "compare": false, 00:10:33.720 "compare_and_write": false, 00:10:33.720 "abort": true, 00:10:33.720 "seek_hole": 
false, 00:10:33.720 "seek_data": false, 00:10:33.720 "copy": true, 00:10:33.720 "nvme_iov_md": false 00:10:33.720 }, 00:10:33.720 "memory_domains": [ 00:10:33.720 { 00:10:33.720 "dma_device_id": "system", 00:10:33.720 "dma_device_type": 1 00:10:33.720 }, 00:10:33.720 { 00:10:33.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.720 "dma_device_type": 2 00:10:33.720 } 00:10:33.720 ], 00:10:33.720 "driver_specific": {} 00:10:33.720 } 00:10:33.720 ] 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.720 "name": "Existed_Raid", 00:10:33.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.720 "strip_size_kb": 64, 00:10:33.720 "state": "configuring", 00:10:33.720 "raid_level": "concat", 00:10:33.720 "superblock": false, 00:10:33.720 "num_base_bdevs": 3, 00:10:33.720 "num_base_bdevs_discovered": 2, 00:10:33.720 "num_base_bdevs_operational": 3, 00:10:33.720 "base_bdevs_list": [ 00:10:33.720 { 00:10:33.720 "name": "BaseBdev1", 00:10:33.720 "uuid": "55c0579d-44f3-45aa-b1e0-bb01805b5db5", 00:10:33.720 "is_configured": true, 00:10:33.720 "data_offset": 0, 00:10:33.720 "data_size": 65536 00:10:33.720 }, 00:10:33.720 { 00:10:33.720 "name": "BaseBdev2", 00:10:33.720 "uuid": "974bc579-2299-4382-8579-d4009be39032", 00:10:33.720 "is_configured": true, 00:10:33.720 "data_offset": 0, 00:10:33.720 "data_size": 65536 00:10:33.720 }, 00:10:33.720 { 00:10:33.720 "name": "BaseBdev3", 00:10:33.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.720 "is_configured": false, 00:10:33.720 "data_offset": 0, 00:10:33.720 "data_size": 0 00:10:33.720 } 00:10:33.720 ] 00:10:33.720 }' 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.720 09:44:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.979 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.979 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.979 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.238 [2024-10-11 09:44:18.676897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.238 [2024-10-11 09:44:18.677048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:34.238 [2024-10-11 09:44:18.677086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:34.238 [2024-10-11 09:44:18.677409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:34.238 [2024-10-11 09:44:18.677646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.238 [2024-10-11 09:44:18.677696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:34.238 [2024-10-11 09:44:18.678046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.238 BaseBdev3 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.238 09:44:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.238 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.238 [ 00:10:34.238 { 00:10:34.238 "name": "BaseBdev3", 00:10:34.238 "aliases": [ 00:10:34.238 "a37577fe-ac7f-4f92-ad6d-66ac1c430dec" 00:10:34.238 ], 00:10:34.238 "product_name": "Malloc disk", 00:10:34.238 "block_size": 512, 00:10:34.238 "num_blocks": 65536, 00:10:34.238 "uuid": "a37577fe-ac7f-4f92-ad6d-66ac1c430dec", 00:10:34.238 "assigned_rate_limits": { 00:10:34.238 "rw_ios_per_sec": 0, 00:10:34.238 "rw_mbytes_per_sec": 0, 00:10:34.238 "r_mbytes_per_sec": 0, 00:10:34.238 "w_mbytes_per_sec": 0 00:10:34.238 }, 00:10:34.238 "claimed": true, 00:10:34.238 "claim_type": "exclusive_write", 00:10:34.238 "zoned": false, 00:10:34.238 "supported_io_types": { 00:10:34.238 "read": true, 00:10:34.238 "write": true, 00:10:34.238 "unmap": true, 00:10:34.238 "flush": true, 00:10:34.239 "reset": true, 00:10:34.239 "nvme_admin": false, 00:10:34.239 "nvme_io": false, 00:10:34.239 "nvme_io_md": false, 00:10:34.239 "write_zeroes": true, 00:10:34.239 "zcopy": true, 00:10:34.239 "get_zone_info": false, 00:10:34.239 "zone_management": false, 00:10:34.239 "zone_append": false, 00:10:34.239 "compare": false, 
00:10:34.239 "compare_and_write": false, 00:10:34.239 "abort": true, 00:10:34.239 "seek_hole": false, 00:10:34.239 "seek_data": false, 00:10:34.239 "copy": true, 00:10:34.239 "nvme_iov_md": false 00:10:34.239 }, 00:10:34.239 "memory_domains": [ 00:10:34.239 { 00:10:34.239 "dma_device_id": "system", 00:10:34.239 "dma_device_type": 1 00:10:34.239 }, 00:10:34.239 { 00:10:34.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.239 "dma_device_type": 2 00:10:34.239 } 00:10:34.239 ], 00:10:34.239 "driver_specific": {} 00:10:34.239 } 00:10:34.239 ] 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.239 "name": "Existed_Raid", 00:10:34.239 "uuid": "f4b815a9-1b91-4578-94f2-92c6c05ac78c", 00:10:34.239 "strip_size_kb": 64, 00:10:34.239 "state": "online", 00:10:34.239 "raid_level": "concat", 00:10:34.239 "superblock": false, 00:10:34.239 "num_base_bdevs": 3, 00:10:34.239 "num_base_bdevs_discovered": 3, 00:10:34.239 "num_base_bdevs_operational": 3, 00:10:34.239 "base_bdevs_list": [ 00:10:34.239 { 00:10:34.239 "name": "BaseBdev1", 00:10:34.239 "uuid": "55c0579d-44f3-45aa-b1e0-bb01805b5db5", 00:10:34.239 "is_configured": true, 00:10:34.239 "data_offset": 0, 00:10:34.239 "data_size": 65536 00:10:34.239 }, 00:10:34.239 { 00:10:34.239 "name": "BaseBdev2", 00:10:34.239 "uuid": "974bc579-2299-4382-8579-d4009be39032", 00:10:34.239 "is_configured": true, 00:10:34.239 "data_offset": 0, 00:10:34.239 "data_size": 65536 00:10:34.239 }, 00:10:34.239 { 00:10:34.239 "name": "BaseBdev3", 00:10:34.239 "uuid": "a37577fe-ac7f-4f92-ad6d-66ac1c430dec", 00:10:34.239 "is_configured": true, 00:10:34.239 "data_offset": 0, 00:10:34.239 "data_size": 65536 00:10:34.239 } 00:10:34.239 ] 00:10:34.239 }' 00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:34.239 09:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.806 [2024-10-11 09:44:19.216420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.806 "name": "Existed_Raid", 00:10:34.806 "aliases": [ 00:10:34.806 "f4b815a9-1b91-4578-94f2-92c6c05ac78c" 00:10:34.806 ], 00:10:34.806 "product_name": "Raid Volume", 00:10:34.806 "block_size": 512, 00:10:34.806 "num_blocks": 196608, 00:10:34.806 "uuid": "f4b815a9-1b91-4578-94f2-92c6c05ac78c", 00:10:34.806 "assigned_rate_limits": { 00:10:34.806 "rw_ios_per_sec": 0, 00:10:34.806 "rw_mbytes_per_sec": 0, 00:10:34.806 "r_mbytes_per_sec": 
0, 00:10:34.806 "w_mbytes_per_sec": 0 00:10:34.806 }, 00:10:34.806 "claimed": false, 00:10:34.806 "zoned": false, 00:10:34.806 "supported_io_types": { 00:10:34.806 "read": true, 00:10:34.806 "write": true, 00:10:34.806 "unmap": true, 00:10:34.806 "flush": true, 00:10:34.806 "reset": true, 00:10:34.806 "nvme_admin": false, 00:10:34.806 "nvme_io": false, 00:10:34.806 "nvme_io_md": false, 00:10:34.806 "write_zeroes": true, 00:10:34.806 "zcopy": false, 00:10:34.806 "get_zone_info": false, 00:10:34.806 "zone_management": false, 00:10:34.806 "zone_append": false, 00:10:34.806 "compare": false, 00:10:34.806 "compare_and_write": false, 00:10:34.806 "abort": false, 00:10:34.806 "seek_hole": false, 00:10:34.806 "seek_data": false, 00:10:34.806 "copy": false, 00:10:34.806 "nvme_iov_md": false 00:10:34.806 }, 00:10:34.806 "memory_domains": [ 00:10:34.806 { 00:10:34.806 "dma_device_id": "system", 00:10:34.806 "dma_device_type": 1 00:10:34.806 }, 00:10:34.806 { 00:10:34.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.806 "dma_device_type": 2 00:10:34.806 }, 00:10:34.806 { 00:10:34.806 "dma_device_id": "system", 00:10:34.806 "dma_device_type": 1 00:10:34.806 }, 00:10:34.806 { 00:10:34.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.806 "dma_device_type": 2 00:10:34.806 }, 00:10:34.806 { 00:10:34.806 "dma_device_id": "system", 00:10:34.806 "dma_device_type": 1 00:10:34.806 }, 00:10:34.806 { 00:10:34.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.806 "dma_device_type": 2 00:10:34.806 } 00:10:34.806 ], 00:10:34.806 "driver_specific": { 00:10:34.806 "raid": { 00:10:34.806 "uuid": "f4b815a9-1b91-4578-94f2-92c6c05ac78c", 00:10:34.806 "strip_size_kb": 64, 00:10:34.806 "state": "online", 00:10:34.806 "raid_level": "concat", 00:10:34.806 "superblock": false, 00:10:34.806 "num_base_bdevs": 3, 00:10:34.806 "num_base_bdevs_discovered": 3, 00:10:34.806 "num_base_bdevs_operational": 3, 00:10:34.806 "base_bdevs_list": [ 00:10:34.806 { 00:10:34.806 "name": "BaseBdev1", 
00:10:34.806 "uuid": "55c0579d-44f3-45aa-b1e0-bb01805b5db5", 00:10:34.806 "is_configured": true, 00:10:34.806 "data_offset": 0, 00:10:34.806 "data_size": 65536 00:10:34.806 }, 00:10:34.806 { 00:10:34.806 "name": "BaseBdev2", 00:10:34.806 "uuid": "974bc579-2299-4382-8579-d4009be39032", 00:10:34.806 "is_configured": true, 00:10:34.806 "data_offset": 0, 00:10:34.806 "data_size": 65536 00:10:34.806 }, 00:10:34.806 { 00:10:34.806 "name": "BaseBdev3", 00:10:34.806 "uuid": "a37577fe-ac7f-4f92-ad6d-66ac1c430dec", 00:10:34.806 "is_configured": true, 00:10:34.806 "data_offset": 0, 00:10:34.806 "data_size": 65536 00:10:34.806 } 00:10:34.806 ] 00:10:34.806 } 00:10:34.806 } 00:10:34.806 }' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.806 BaseBdev2 00:10:34.806 BaseBdev3' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.806 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.065 [2024-10-11 09:44:19.483940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.065 [2024-10-11 09:44:19.483970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.065 [2024-10-11 09:44:19.484030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.065 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.065 "name": "Existed_Raid", 00:10:35.065 "uuid": "f4b815a9-1b91-4578-94f2-92c6c05ac78c", 00:10:35.065 "strip_size_kb": 64, 00:10:35.065 "state": "offline", 00:10:35.065 "raid_level": "concat", 00:10:35.065 "superblock": false, 00:10:35.065 "num_base_bdevs": 3, 00:10:35.065 "num_base_bdevs_discovered": 2, 00:10:35.065 "num_base_bdevs_operational": 2, 00:10:35.065 "base_bdevs_list": [ 00:10:35.065 { 00:10:35.065 "name": null, 00:10:35.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.065 "is_configured": false, 00:10:35.065 "data_offset": 0, 00:10:35.065 "data_size": 65536 00:10:35.065 }, 00:10:35.065 { 00:10:35.065 "name": "BaseBdev2", 00:10:35.065 "uuid": 
"974bc579-2299-4382-8579-d4009be39032", 00:10:35.065 "is_configured": true, 00:10:35.065 "data_offset": 0, 00:10:35.065 "data_size": 65536 00:10:35.065 }, 00:10:35.065 { 00:10:35.065 "name": "BaseBdev3", 00:10:35.065 "uuid": "a37577fe-ac7f-4f92-ad6d-66ac1c430dec", 00:10:35.065 "is_configured": true, 00:10:35.065 "data_offset": 0, 00:10:35.065 "data_size": 65536 00:10:35.065 } 00:10:35.065 ] 00:10:35.065 }' 00:10:35.066 09:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.066 09:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.633 [2024-10-11 09:44:20.084060] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.633 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.634 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.634 [2024-10-11 09:44:20.247133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.634 [2024-10-11 09:44:20.247190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.892 09:44:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.892 BaseBdev2 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.892 
09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.892 [ 00:10:35.892 { 00:10:35.892 "name": "BaseBdev2", 00:10:35.892 "aliases": [ 00:10:35.892 "8807fc8c-b458-4da6-926e-1d227cdafdde" 00:10:35.892 ], 00:10:35.892 "product_name": "Malloc disk", 00:10:35.892 "block_size": 512, 00:10:35.892 "num_blocks": 65536, 00:10:35.892 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:35.892 "assigned_rate_limits": { 00:10:35.892 "rw_ios_per_sec": 0, 00:10:35.892 "rw_mbytes_per_sec": 0, 00:10:35.892 "r_mbytes_per_sec": 0, 00:10:35.892 "w_mbytes_per_sec": 0 00:10:35.892 }, 00:10:35.892 "claimed": false, 00:10:35.892 "zoned": false, 00:10:35.892 "supported_io_types": { 00:10:35.892 "read": true, 00:10:35.892 "write": true, 00:10:35.892 "unmap": true, 00:10:35.892 "flush": true, 00:10:35.892 "reset": true, 00:10:35.892 "nvme_admin": false, 00:10:35.892 "nvme_io": false, 00:10:35.892 "nvme_io_md": false, 00:10:35.892 "write_zeroes": true, 
00:10:35.892 "zcopy": true, 00:10:35.892 "get_zone_info": false, 00:10:35.892 "zone_management": false, 00:10:35.892 "zone_append": false, 00:10:35.892 "compare": false, 00:10:35.892 "compare_and_write": false, 00:10:35.892 "abort": true, 00:10:35.892 "seek_hole": false, 00:10:35.892 "seek_data": false, 00:10:35.892 "copy": true, 00:10:35.892 "nvme_iov_md": false 00:10:35.892 }, 00:10:35.892 "memory_domains": [ 00:10:35.892 { 00:10:35.892 "dma_device_id": "system", 00:10:35.892 "dma_device_type": 1 00:10:35.892 }, 00:10:35.892 { 00:10:35.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.892 "dma_device_type": 2 00:10:35.892 } 00:10:35.892 ], 00:10:35.892 "driver_specific": {} 00:10:35.892 } 00:10:35.892 ] 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.892 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.151 BaseBdev3 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.151 09:44:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.151 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.151 [ 00:10:36.151 { 00:10:36.151 "name": "BaseBdev3", 00:10:36.151 "aliases": [ 00:10:36.151 "91d3fbb8-9628-49c2-bea7-4377798cedd4" 00:10:36.151 ], 00:10:36.151 "product_name": "Malloc disk", 00:10:36.151 "block_size": 512, 00:10:36.151 "num_blocks": 65536, 00:10:36.151 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:36.151 "assigned_rate_limits": { 00:10:36.151 "rw_ios_per_sec": 0, 00:10:36.151 "rw_mbytes_per_sec": 0, 00:10:36.151 "r_mbytes_per_sec": 0, 00:10:36.151 "w_mbytes_per_sec": 0 00:10:36.151 }, 00:10:36.151 "claimed": false, 00:10:36.151 "zoned": false, 00:10:36.151 "supported_io_types": { 00:10:36.151 "read": true, 00:10:36.151 "write": true, 00:10:36.151 "unmap": true, 00:10:36.151 "flush": true, 00:10:36.151 "reset": true, 00:10:36.151 "nvme_admin": false, 00:10:36.151 "nvme_io": false, 00:10:36.151 "nvme_io_md": false, 00:10:36.151 "write_zeroes": true, 
00:10:36.151 "zcopy": true, 00:10:36.151 "get_zone_info": false, 00:10:36.151 "zone_management": false, 00:10:36.151 "zone_append": false, 00:10:36.151 "compare": false, 00:10:36.151 "compare_and_write": false, 00:10:36.151 "abort": true, 00:10:36.151 "seek_hole": false, 00:10:36.151 "seek_data": false, 00:10:36.151 "copy": true, 00:10:36.151 "nvme_iov_md": false 00:10:36.151 }, 00:10:36.151 "memory_domains": [ 00:10:36.151 { 00:10:36.151 "dma_device_id": "system", 00:10:36.151 "dma_device_type": 1 00:10:36.151 }, 00:10:36.151 { 00:10:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.151 "dma_device_type": 2 00:10:36.152 } 00:10:36.152 ], 00:10:36.152 "driver_specific": {} 00:10:36.152 } 00:10:36.152 ] 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.152 [2024-10-11 09:44:20.577775] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.152 [2024-10-11 09:44:20.577866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.152 [2024-10-11 09:44:20.577935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.152 [2024-10-11 09:44:20.580146] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.152 "name": "Existed_Raid", 00:10:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.152 "strip_size_kb": 64, 00:10:36.152 "state": "configuring", 00:10:36.152 "raid_level": "concat", 00:10:36.152 "superblock": false, 00:10:36.152 "num_base_bdevs": 3, 00:10:36.152 "num_base_bdevs_discovered": 2, 00:10:36.152 "num_base_bdevs_operational": 3, 00:10:36.152 "base_bdevs_list": [ 00:10:36.152 { 00:10:36.152 "name": "BaseBdev1", 00:10:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.152 "is_configured": false, 00:10:36.152 "data_offset": 0, 00:10:36.152 "data_size": 0 00:10:36.152 }, 00:10:36.152 { 00:10:36.152 "name": "BaseBdev2", 00:10:36.152 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:36.152 "is_configured": true, 00:10:36.152 "data_offset": 0, 00:10:36.152 "data_size": 65536 00:10:36.152 }, 00:10:36.152 { 00:10:36.152 "name": "BaseBdev3", 00:10:36.152 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:36.152 "is_configured": true, 00:10:36.152 "data_offset": 0, 00:10:36.152 "data_size": 65536 00:10:36.152 } 00:10:36.152 ] 00:10:36.152 }' 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.152 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.410 [2024-10-11 09:44:20.977080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.410 09:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.410 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.410 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.410 "name": "Existed_Raid", 00:10:36.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.410 "strip_size_kb": 64, 00:10:36.410 "state": "configuring", 00:10:36.410 "raid_level": "concat", 00:10:36.410 "superblock": false, 
00:10:36.410 "num_base_bdevs": 3, 00:10:36.410 "num_base_bdevs_discovered": 1, 00:10:36.410 "num_base_bdevs_operational": 3, 00:10:36.410 "base_bdevs_list": [ 00:10:36.410 { 00:10:36.410 "name": "BaseBdev1", 00:10:36.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.410 "is_configured": false, 00:10:36.410 "data_offset": 0, 00:10:36.410 "data_size": 0 00:10:36.410 }, 00:10:36.410 { 00:10:36.410 "name": null, 00:10:36.410 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:36.410 "is_configured": false, 00:10:36.410 "data_offset": 0, 00:10:36.410 "data_size": 65536 00:10:36.410 }, 00:10:36.410 { 00:10:36.410 "name": "BaseBdev3", 00:10:36.410 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:36.410 "is_configured": true, 00:10:36.410 "data_offset": 0, 00:10:36.410 "data_size": 65536 00:10:36.410 } 00:10:36.410 ] 00:10:36.410 }' 00:10:36.410 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.410 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.979 
09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.979 [2024-10-11 09:44:21.516361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.979 BaseBdev1 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.979 [ 00:10:36.979 { 00:10:36.979 "name": "BaseBdev1", 00:10:36.979 "aliases": [ 00:10:36.979 "deff3b37-57b9-476e-ba25-8fab86985c69" 00:10:36.979 ], 00:10:36.979 "product_name": 
"Malloc disk", 00:10:36.979 "block_size": 512, 00:10:36.979 "num_blocks": 65536, 00:10:36.979 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:36.979 "assigned_rate_limits": { 00:10:36.979 "rw_ios_per_sec": 0, 00:10:36.979 "rw_mbytes_per_sec": 0, 00:10:36.979 "r_mbytes_per_sec": 0, 00:10:36.979 "w_mbytes_per_sec": 0 00:10:36.979 }, 00:10:36.979 "claimed": true, 00:10:36.979 "claim_type": "exclusive_write", 00:10:36.979 "zoned": false, 00:10:36.979 "supported_io_types": { 00:10:36.979 "read": true, 00:10:36.979 "write": true, 00:10:36.979 "unmap": true, 00:10:36.979 "flush": true, 00:10:36.979 "reset": true, 00:10:36.979 "nvme_admin": false, 00:10:36.979 "nvme_io": false, 00:10:36.979 "nvme_io_md": false, 00:10:36.979 "write_zeroes": true, 00:10:36.979 "zcopy": true, 00:10:36.979 "get_zone_info": false, 00:10:36.979 "zone_management": false, 00:10:36.979 "zone_append": false, 00:10:36.979 "compare": false, 00:10:36.979 "compare_and_write": false, 00:10:36.979 "abort": true, 00:10:36.979 "seek_hole": false, 00:10:36.979 "seek_data": false, 00:10:36.979 "copy": true, 00:10:36.979 "nvme_iov_md": false 00:10:36.979 }, 00:10:36.979 "memory_domains": [ 00:10:36.979 { 00:10:36.979 "dma_device_id": "system", 00:10:36.979 "dma_device_type": 1 00:10:36.979 }, 00:10:36.979 { 00:10:36.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.979 "dma_device_type": 2 00:10:36.979 } 00:10:36.979 ], 00:10:36.979 "driver_specific": {} 00:10:36.979 } 00:10:36.979 ] 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.979 09:44:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.979 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.238 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.238 "name": "Existed_Raid", 00:10:37.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.238 "strip_size_kb": 64, 00:10:37.238 "state": "configuring", 00:10:37.238 "raid_level": "concat", 00:10:37.238 "superblock": false, 00:10:37.238 "num_base_bdevs": 3, 00:10:37.238 "num_base_bdevs_discovered": 2, 00:10:37.238 "num_base_bdevs_operational": 3, 00:10:37.238 "base_bdevs_list": [ 00:10:37.238 { 00:10:37.238 "name": "BaseBdev1", 
00:10:37.238 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:37.238 "is_configured": true, 00:10:37.238 "data_offset": 0, 00:10:37.238 "data_size": 65536 00:10:37.238 }, 00:10:37.238 { 00:10:37.238 "name": null, 00:10:37.238 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:37.238 "is_configured": false, 00:10:37.238 "data_offset": 0, 00:10:37.238 "data_size": 65536 00:10:37.238 }, 00:10:37.238 { 00:10:37.238 "name": "BaseBdev3", 00:10:37.238 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:37.238 "is_configured": true, 00:10:37.238 "data_offset": 0, 00:10:37.238 "data_size": 65536 00:10:37.238 } 00:10:37.238 ] 00:10:37.238 }' 00:10:37.239 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.239 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.497 [2024-10-11 09:44:21.991715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.497 
09:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.497 09:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.497 "name": "Existed_Raid", 00:10:37.497 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:37.497 "strip_size_kb": 64, 00:10:37.497 "state": "configuring", 00:10:37.497 "raid_level": "concat", 00:10:37.497 "superblock": false, 00:10:37.497 "num_base_bdevs": 3, 00:10:37.497 "num_base_bdevs_discovered": 1, 00:10:37.497 "num_base_bdevs_operational": 3, 00:10:37.497 "base_bdevs_list": [ 00:10:37.497 { 00:10:37.497 "name": "BaseBdev1", 00:10:37.497 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:37.497 "is_configured": true, 00:10:37.497 "data_offset": 0, 00:10:37.497 "data_size": 65536 00:10:37.497 }, 00:10:37.497 { 00:10:37.497 "name": null, 00:10:37.497 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:37.497 "is_configured": false, 00:10:37.497 "data_offset": 0, 00:10:37.497 "data_size": 65536 00:10:37.497 }, 00:10:37.497 { 00:10:37.497 "name": null, 00:10:37.497 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:37.497 "is_configured": false, 00:10:37.497 "data_offset": 0, 00:10:37.497 "data_size": 65536 00:10:37.497 } 00:10:37.497 ] 00:10:37.497 }' 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.497 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.064 [2024-10-11 09:44:22.522917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.064 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.065 "name": "Existed_Raid", 00:10:38.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.065 "strip_size_kb": 64, 00:10:38.065 "state": "configuring", 00:10:38.065 "raid_level": "concat", 00:10:38.065 "superblock": false, 00:10:38.065 "num_base_bdevs": 3, 00:10:38.065 "num_base_bdevs_discovered": 2, 00:10:38.065 "num_base_bdevs_operational": 3, 00:10:38.065 "base_bdevs_list": [ 00:10:38.065 { 00:10:38.065 "name": "BaseBdev1", 00:10:38.065 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:38.065 "is_configured": true, 00:10:38.065 "data_offset": 0, 00:10:38.065 "data_size": 65536 00:10:38.065 }, 00:10:38.065 { 00:10:38.065 "name": null, 00:10:38.065 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:38.065 "is_configured": false, 00:10:38.065 "data_offset": 0, 00:10:38.065 "data_size": 65536 00:10:38.065 }, 00:10:38.065 { 00:10:38.065 "name": "BaseBdev3", 00:10:38.065 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:38.065 "is_configured": true, 00:10:38.065 "data_offset": 0, 00:10:38.065 "data_size": 65536 00:10:38.065 } 00:10:38.065 ] 00:10:38.065 }' 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.065 09:44:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.631 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.631 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.631 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.631 09:44:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.631 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.631 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.632 [2024-10-11 09:44:23.078013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.632 "name": "Existed_Raid", 00:10:38.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.632 "strip_size_kb": 64, 00:10:38.632 "state": "configuring", 00:10:38.632 "raid_level": "concat", 00:10:38.632 "superblock": false, 00:10:38.632 "num_base_bdevs": 3, 00:10:38.632 "num_base_bdevs_discovered": 1, 00:10:38.632 "num_base_bdevs_operational": 3, 00:10:38.632 "base_bdevs_list": [ 00:10:38.632 { 00:10:38.632 "name": null, 00:10:38.632 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:38.632 "is_configured": false, 00:10:38.632 "data_offset": 0, 00:10:38.632 "data_size": 65536 00:10:38.632 }, 00:10:38.632 { 00:10:38.632 "name": null, 00:10:38.632 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:38.632 "is_configured": false, 00:10:38.632 "data_offset": 0, 00:10:38.632 "data_size": 65536 00:10:38.632 }, 00:10:38.632 { 00:10:38.632 "name": "BaseBdev3", 00:10:38.632 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:38.632 "is_configured": true, 00:10:38.632 "data_offset": 0, 00:10:38.632 "data_size": 65536 00:10:38.632 } 00:10:38.632 ] 00:10:38.632 }' 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.632 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 [2024-10-11 09:44:23.699637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.200 "name": "Existed_Raid", 00:10:39.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.200 "strip_size_kb": 64, 00:10:39.200 "state": "configuring", 00:10:39.200 "raid_level": "concat", 00:10:39.200 "superblock": false, 00:10:39.200 "num_base_bdevs": 3, 00:10:39.200 "num_base_bdevs_discovered": 2, 00:10:39.200 "num_base_bdevs_operational": 3, 00:10:39.200 "base_bdevs_list": [ 00:10:39.200 { 00:10:39.200 "name": null, 00:10:39.200 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:39.200 "is_configured": false, 00:10:39.200 "data_offset": 0, 00:10:39.200 "data_size": 65536 00:10:39.200 }, 00:10:39.200 { 00:10:39.200 "name": "BaseBdev2", 00:10:39.200 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:39.200 "is_configured": true, 00:10:39.200 "data_offset": 0, 00:10:39.200 "data_size": 65536 00:10:39.200 }, 00:10:39.200 { 
00:10:39.200 "name": "BaseBdev3", 00:10:39.200 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:39.200 "is_configured": true, 00:10:39.200 "data_offset": 0, 00:10:39.200 "data_size": 65536 00:10:39.200 } 00:10:39.200 ] 00:10:39.200 }' 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.200 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u deff3b37-57b9-476e-ba25-8fab86985c69 00:10:39.766 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.766 09:44:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.766 [2024-10-11 09:44:24.272978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.766 [2024-10-11 09:44:24.273039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:39.766 [2024-10-11 09:44:24.273050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:39.767 [2024-10-11 09:44:24.273353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:39.767 [2024-10-11 09:44:24.273528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:39.767 [2024-10-11 09:44:24.273539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:39.767 [2024-10-11 09:44:24.273855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.767 NewBaseBdev 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.767 [ 00:10:39.767 { 00:10:39.767 "name": "NewBaseBdev", 00:10:39.767 "aliases": [ 00:10:39.767 "deff3b37-57b9-476e-ba25-8fab86985c69" 00:10:39.767 ], 00:10:39.767 "product_name": "Malloc disk", 00:10:39.767 "block_size": 512, 00:10:39.767 "num_blocks": 65536, 00:10:39.767 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:39.767 "assigned_rate_limits": { 00:10:39.767 "rw_ios_per_sec": 0, 00:10:39.767 "rw_mbytes_per_sec": 0, 00:10:39.767 "r_mbytes_per_sec": 0, 00:10:39.767 "w_mbytes_per_sec": 0 00:10:39.767 }, 00:10:39.767 "claimed": true, 00:10:39.767 "claim_type": "exclusive_write", 00:10:39.767 "zoned": false, 00:10:39.767 "supported_io_types": { 00:10:39.767 "read": true, 00:10:39.767 "write": true, 00:10:39.767 "unmap": true, 00:10:39.767 "flush": true, 00:10:39.767 "reset": true, 00:10:39.767 "nvme_admin": false, 00:10:39.767 "nvme_io": false, 00:10:39.767 "nvme_io_md": false, 00:10:39.767 "write_zeroes": true, 00:10:39.767 "zcopy": true, 00:10:39.767 "get_zone_info": false, 00:10:39.767 "zone_management": false, 00:10:39.767 "zone_append": false, 00:10:39.767 "compare": false, 00:10:39.767 "compare_and_write": false, 00:10:39.767 "abort": true, 00:10:39.767 "seek_hole": false, 00:10:39.767 "seek_data": false, 00:10:39.767 "copy": true, 00:10:39.767 "nvme_iov_md": false 00:10:39.767 }, 00:10:39.767 "memory_domains": [ 00:10:39.767 { 00:10:39.767 
"dma_device_id": "system", 00:10:39.767 "dma_device_type": 1 00:10:39.767 }, 00:10:39.767 { 00:10:39.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.767 "dma_device_type": 2 00:10:39.767 } 00:10:39.767 ], 00:10:39.767 "driver_specific": {} 00:10:39.767 } 00:10:39.767 ] 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.767 "name": "Existed_Raid", 00:10:39.767 "uuid": "6a3d7d01-15e9-44b0-b051-ddc3bbedbe39", 00:10:39.767 "strip_size_kb": 64, 00:10:39.767 "state": "online", 00:10:39.767 "raid_level": "concat", 00:10:39.767 "superblock": false, 00:10:39.767 "num_base_bdevs": 3, 00:10:39.767 "num_base_bdevs_discovered": 3, 00:10:39.767 "num_base_bdevs_operational": 3, 00:10:39.767 "base_bdevs_list": [ 00:10:39.767 { 00:10:39.767 "name": "NewBaseBdev", 00:10:39.767 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:39.767 "is_configured": true, 00:10:39.767 "data_offset": 0, 00:10:39.767 "data_size": 65536 00:10:39.767 }, 00:10:39.767 { 00:10:39.767 "name": "BaseBdev2", 00:10:39.767 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:39.767 "is_configured": true, 00:10:39.767 "data_offset": 0, 00:10:39.767 "data_size": 65536 00:10:39.767 }, 00:10:39.767 { 00:10:39.767 "name": "BaseBdev3", 00:10:39.767 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:39.767 "is_configured": true, 00:10:39.767 "data_offset": 0, 00:10:39.767 "data_size": 65536 00:10:39.767 } 00:10:39.767 ] 00:10:39.767 }' 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.767 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.335 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.336 [2024-10-11 09:44:24.780649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.336 "name": "Existed_Raid", 00:10:40.336 "aliases": [ 00:10:40.336 "6a3d7d01-15e9-44b0-b051-ddc3bbedbe39" 00:10:40.336 ], 00:10:40.336 "product_name": "Raid Volume", 00:10:40.336 "block_size": 512, 00:10:40.336 "num_blocks": 196608, 00:10:40.336 "uuid": "6a3d7d01-15e9-44b0-b051-ddc3bbedbe39", 00:10:40.336 "assigned_rate_limits": { 00:10:40.336 "rw_ios_per_sec": 0, 00:10:40.336 "rw_mbytes_per_sec": 0, 00:10:40.336 "r_mbytes_per_sec": 0, 00:10:40.336 "w_mbytes_per_sec": 0 00:10:40.336 }, 00:10:40.336 "claimed": false, 00:10:40.336 "zoned": false, 00:10:40.336 "supported_io_types": { 00:10:40.336 "read": true, 00:10:40.336 "write": true, 00:10:40.336 "unmap": true, 00:10:40.336 "flush": true, 00:10:40.336 "reset": true, 00:10:40.336 "nvme_admin": false, 00:10:40.336 "nvme_io": false, 00:10:40.336 "nvme_io_md": false, 00:10:40.336 "write_zeroes": true, 00:10:40.336 "zcopy": false, 
00:10:40.336 "get_zone_info": false, 00:10:40.336 "zone_management": false, 00:10:40.336 "zone_append": false, 00:10:40.336 "compare": false, 00:10:40.336 "compare_and_write": false, 00:10:40.336 "abort": false, 00:10:40.336 "seek_hole": false, 00:10:40.336 "seek_data": false, 00:10:40.336 "copy": false, 00:10:40.336 "nvme_iov_md": false 00:10:40.336 }, 00:10:40.336 "memory_domains": [ 00:10:40.336 { 00:10:40.336 "dma_device_id": "system", 00:10:40.336 "dma_device_type": 1 00:10:40.336 }, 00:10:40.336 { 00:10:40.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.336 "dma_device_type": 2 00:10:40.336 }, 00:10:40.336 { 00:10:40.336 "dma_device_id": "system", 00:10:40.336 "dma_device_type": 1 00:10:40.336 }, 00:10:40.336 { 00:10:40.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.336 "dma_device_type": 2 00:10:40.336 }, 00:10:40.336 { 00:10:40.336 "dma_device_id": "system", 00:10:40.336 "dma_device_type": 1 00:10:40.336 }, 00:10:40.336 { 00:10:40.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.336 "dma_device_type": 2 00:10:40.336 } 00:10:40.336 ], 00:10:40.336 "driver_specific": { 00:10:40.336 "raid": { 00:10:40.336 "uuid": "6a3d7d01-15e9-44b0-b051-ddc3bbedbe39", 00:10:40.336 "strip_size_kb": 64, 00:10:40.336 "state": "online", 00:10:40.336 "raid_level": "concat", 00:10:40.336 "superblock": false, 00:10:40.336 "num_base_bdevs": 3, 00:10:40.336 "num_base_bdevs_discovered": 3, 00:10:40.336 "num_base_bdevs_operational": 3, 00:10:40.336 "base_bdevs_list": [ 00:10:40.336 { 00:10:40.336 "name": "NewBaseBdev", 00:10:40.336 "uuid": "deff3b37-57b9-476e-ba25-8fab86985c69", 00:10:40.336 "is_configured": true, 00:10:40.336 "data_offset": 0, 00:10:40.336 "data_size": 65536 00:10:40.336 }, 00:10:40.336 { 00:10:40.336 "name": "BaseBdev2", 00:10:40.336 "uuid": "8807fc8c-b458-4da6-926e-1d227cdafdde", 00:10:40.336 "is_configured": true, 00:10:40.336 "data_offset": 0, 00:10:40.336 "data_size": 65536 00:10:40.336 }, 00:10:40.336 { 00:10:40.336 "name": "BaseBdev3", 
00:10:40.336 "uuid": "91d3fbb8-9628-49c2-bea7-4377798cedd4", 00:10:40.336 "is_configured": true, 00:10:40.336 "data_offset": 0, 00:10:40.336 "data_size": 65536 00:10:40.336 } 00:10:40.336 ] 00:10:40.336 } 00:10:40.336 } 00:10:40.336 }' 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:40.336 BaseBdev2 00:10:40.336 BaseBdev3' 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.336 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.595 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:40.595 [2024-10-11 09:44:25.075884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.595 [2024-10-11 09:44:25.075970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.595 [2024-10-11 09:44:25.076084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.595 [2024-10-11 09:44:25.076152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.595 [2024-10-11 09:44:25.076167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66046 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 66046 ']' 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 66046 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66046 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66046' 00:10:40.595 killing process with pid 66046 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 66046 00:10:40.595 
[2024-10-11 09:44:25.124719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.595 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 66046 00:10:40.855 [2024-10-11 09:44:25.465818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.238 00:10:42.238 real 0m11.193s 00:10:42.238 user 0m17.688s 00:10:42.238 sys 0m1.905s 00:10:42.238 ************************************ 00:10:42.238 END TEST raid_state_function_test 00:10:42.238 ************************************ 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.238 09:44:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:42.238 09:44:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.238 09:44:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.238 09:44:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.238 ************************************ 00:10:42.238 START TEST raid_state_function_test_sb 00:10:42.238 ************************************ 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.238 09:44:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66678 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66678' 00:10:42.238 Process raid pid: 66678 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66678 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66678 ']' 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.238 09:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.497 [2024-10-11 09:44:26.934182] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:42.497 [2024-10-11 09:44:26.934413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.497 [2024-10-11 09:44:27.102600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.756 [2024-10-11 09:44:27.242235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.015 [2024-10-11 09:44:27.490508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.015 [2024-10-11 09:44:27.490659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.274 [2024-10-11 09:44:27.817079] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.274 [2024-10-11 09:44:27.817145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.274 [2024-10-11 09:44:27.817163] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.274 [2024-10-11 09:44:27.817178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.274 [2024-10-11 09:44:27.817195] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:43.274 [2024-10-11 09:44:27.817209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.274 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.274 "name": "Existed_Raid", 00:10:43.274 "uuid": "42fa0892-e5c5-46d8-bb26-3b25a4e5a2bf", 00:10:43.274 "strip_size_kb": 64, 00:10:43.274 "state": "configuring", 00:10:43.274 "raid_level": "concat", 00:10:43.274 "superblock": true, 00:10:43.274 "num_base_bdevs": 3, 00:10:43.274 "num_base_bdevs_discovered": 0, 00:10:43.274 "num_base_bdevs_operational": 3, 00:10:43.274 "base_bdevs_list": [ 00:10:43.274 { 00:10:43.274 "name": "BaseBdev1", 00:10:43.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.275 "is_configured": false, 00:10:43.275 "data_offset": 0, 00:10:43.275 "data_size": 0 00:10:43.275 }, 00:10:43.275 { 00:10:43.275 "name": "BaseBdev2", 00:10:43.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.275 "is_configured": false, 00:10:43.275 "data_offset": 0, 00:10:43.275 "data_size": 0 00:10:43.275 }, 00:10:43.275 { 00:10:43.275 "name": "BaseBdev3", 00:10:43.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.275 "is_configured": false, 00:10:43.275 "data_offset": 0, 00:10:43.275 "data_size": 0 00:10:43.275 } 00:10:43.275 ] 00:10:43.275 }' 00:10:43.275 09:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.275 09:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 [2024-10-11 09:44:28.224314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.844 [2024-10-11 09:44:28.224426] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 [2024-10-11 09:44:28.236328] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.844 [2024-10-11 09:44:28.236427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.844 [2024-10-11 09:44:28.236472] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.844 [2024-10-11 09:44:28.236509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.844 [2024-10-11 09:44:28.236538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.844 [2024-10-11 09:44:28.236608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 [2024-10-11 09:44:28.289782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.844 BaseBdev1 
00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 [ 00:10:43.844 { 00:10:43.844 "name": "BaseBdev1", 00:10:43.844 "aliases": [ 00:10:43.844 "6bd746f5-5dd0-400e-82be-b8707becb7bf" 00:10:43.844 ], 00:10:43.844 "product_name": "Malloc disk", 00:10:43.844 "block_size": 512, 00:10:43.844 "num_blocks": 65536, 00:10:43.844 "uuid": "6bd746f5-5dd0-400e-82be-b8707becb7bf", 00:10:43.844 "assigned_rate_limits": { 00:10:43.844 
"rw_ios_per_sec": 0, 00:10:43.844 "rw_mbytes_per_sec": 0, 00:10:43.844 "r_mbytes_per_sec": 0, 00:10:43.844 "w_mbytes_per_sec": 0 00:10:43.844 }, 00:10:43.844 "claimed": true, 00:10:43.844 "claim_type": "exclusive_write", 00:10:43.844 "zoned": false, 00:10:43.844 "supported_io_types": { 00:10:43.844 "read": true, 00:10:43.844 "write": true, 00:10:43.844 "unmap": true, 00:10:43.844 "flush": true, 00:10:43.844 "reset": true, 00:10:43.844 "nvme_admin": false, 00:10:43.844 "nvme_io": false, 00:10:43.844 "nvme_io_md": false, 00:10:43.844 "write_zeroes": true, 00:10:43.844 "zcopy": true, 00:10:43.844 "get_zone_info": false, 00:10:43.844 "zone_management": false, 00:10:43.844 "zone_append": false, 00:10:43.844 "compare": false, 00:10:43.844 "compare_and_write": false, 00:10:43.844 "abort": true, 00:10:43.844 "seek_hole": false, 00:10:43.844 "seek_data": false, 00:10:43.844 "copy": true, 00:10:43.844 "nvme_iov_md": false 00:10:43.844 }, 00:10:43.844 "memory_domains": [ 00:10:43.844 { 00:10:43.844 "dma_device_id": "system", 00:10:43.844 "dma_device_type": 1 00:10:43.844 }, 00:10:43.844 { 00:10:43.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.844 "dma_device_type": 2 00:10:43.844 } 00:10:43.844 ], 00:10:43.844 "driver_specific": {} 00:10:43.844 } 00:10:43.844 ] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.844 "name": "Existed_Raid", 00:10:43.844 "uuid": "a486f694-e7dc-4945-bba0-bb1537fd51c5", 00:10:43.844 "strip_size_kb": 64, 00:10:43.844 "state": "configuring", 00:10:43.844 "raid_level": "concat", 00:10:43.844 "superblock": true, 00:10:43.844 "num_base_bdevs": 3, 00:10:43.844 "num_base_bdevs_discovered": 1, 00:10:43.844 "num_base_bdevs_operational": 3, 00:10:43.844 "base_bdevs_list": [ 00:10:43.844 { 00:10:43.844 "name": "BaseBdev1", 00:10:43.844 "uuid": "6bd746f5-5dd0-400e-82be-b8707becb7bf", 00:10:43.844 "is_configured": true, 00:10:43.844 "data_offset": 2048, 00:10:43.844 "data_size": 
63488 00:10:43.844 }, 00:10:43.844 { 00:10:43.844 "name": "BaseBdev2", 00:10:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.844 "is_configured": false, 00:10:43.844 "data_offset": 0, 00:10:43.844 "data_size": 0 00:10:43.844 }, 00:10:43.844 { 00:10:43.844 "name": "BaseBdev3", 00:10:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.844 "is_configured": false, 00:10:43.844 "data_offset": 0, 00:10:43.844 "data_size": 0 00:10:43.844 } 00:10:43.844 ] 00:10:43.844 }' 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.844 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.437 [2024-10-11 09:44:28.796973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.437 [2024-10-11 09:44:28.797035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.437 [2024-10-11 09:44:28.805018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.437 [2024-10-11 
09:44:28.807142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.437 [2024-10-11 09:44:28.807185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.437 [2024-10-11 09:44:28.807207] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.437 [2024-10-11 09:44:28.807233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.437 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.437 "name": "Existed_Raid", 00:10:44.437 "uuid": "141ad755-cc44-431c-9bf6-e601e56d37fe", 00:10:44.437 "strip_size_kb": 64, 00:10:44.437 "state": "configuring", 00:10:44.437 "raid_level": "concat", 00:10:44.437 "superblock": true, 00:10:44.437 "num_base_bdevs": 3, 00:10:44.437 "num_base_bdevs_discovered": 1, 00:10:44.437 "num_base_bdevs_operational": 3, 00:10:44.437 "base_bdevs_list": [ 00:10:44.437 { 00:10:44.437 "name": "BaseBdev1", 00:10:44.437 "uuid": "6bd746f5-5dd0-400e-82be-b8707becb7bf", 00:10:44.437 "is_configured": true, 00:10:44.437 "data_offset": 2048, 00:10:44.437 "data_size": 63488 00:10:44.438 }, 00:10:44.438 { 00:10:44.438 "name": "BaseBdev2", 00:10:44.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.438 "is_configured": false, 00:10:44.438 "data_offset": 0, 00:10:44.438 "data_size": 0 00:10:44.438 }, 00:10:44.438 { 00:10:44.438 "name": "BaseBdev3", 00:10:44.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.438 "is_configured": false, 00:10:44.438 "data_offset": 0, 00:10:44.438 "data_size": 0 00:10:44.438 } 00:10:44.438 ] 00:10:44.438 }' 00:10:44.438 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.438 09:44:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.697 [2024-10-11 09:44:29.303105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.697 BaseBdev2 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.697 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.956 [ 00:10:44.956 { 00:10:44.956 "name": "BaseBdev2", 00:10:44.956 "aliases": [ 00:10:44.956 "132c18ea-1928-46c7-a98d-4046e25a5c37" 00:10:44.956 ], 00:10:44.956 "product_name": "Malloc disk", 00:10:44.956 "block_size": 512, 00:10:44.956 "num_blocks": 65536, 00:10:44.956 "uuid": "132c18ea-1928-46c7-a98d-4046e25a5c37", 00:10:44.956 "assigned_rate_limits": { 00:10:44.956 "rw_ios_per_sec": 0, 00:10:44.956 "rw_mbytes_per_sec": 0, 00:10:44.956 "r_mbytes_per_sec": 0, 00:10:44.956 "w_mbytes_per_sec": 0 00:10:44.956 }, 00:10:44.956 "claimed": true, 00:10:44.956 "claim_type": "exclusive_write", 00:10:44.956 "zoned": false, 00:10:44.956 "supported_io_types": { 00:10:44.956 "read": true, 00:10:44.956 "write": true, 00:10:44.956 "unmap": true, 00:10:44.956 "flush": true, 00:10:44.956 "reset": true, 00:10:44.956 "nvme_admin": false, 00:10:44.956 "nvme_io": false, 00:10:44.956 "nvme_io_md": false, 00:10:44.956 "write_zeroes": true, 00:10:44.956 "zcopy": true, 00:10:44.956 "get_zone_info": false, 00:10:44.956 "zone_management": false, 00:10:44.956 "zone_append": false, 00:10:44.956 "compare": false, 00:10:44.956 "compare_and_write": false, 00:10:44.956 "abort": true, 00:10:44.956 "seek_hole": false, 00:10:44.956 "seek_data": false, 00:10:44.956 "copy": true, 00:10:44.956 "nvme_iov_md": false 00:10:44.956 }, 00:10:44.956 "memory_domains": [ 00:10:44.956 { 00:10:44.956 "dma_device_id": "system", 00:10:44.956 "dma_device_type": 1 00:10:44.956 }, 00:10:44.956 { 00:10:44.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.956 "dma_device_type": 2 00:10:44.956 } 00:10:44.956 ], 00:10:44.956 "driver_specific": {} 00:10:44.956 } 00:10:44.956 ] 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.956 "name": "Existed_Raid", 00:10:44.956 "uuid": "141ad755-cc44-431c-9bf6-e601e56d37fe", 00:10:44.956 "strip_size_kb": 64, 00:10:44.956 "state": "configuring", 00:10:44.956 "raid_level": "concat", 00:10:44.956 "superblock": true, 00:10:44.956 "num_base_bdevs": 3, 00:10:44.956 "num_base_bdevs_discovered": 2, 00:10:44.956 "num_base_bdevs_operational": 3, 00:10:44.956 "base_bdevs_list": [ 00:10:44.956 { 00:10:44.956 "name": "BaseBdev1", 00:10:44.956 "uuid": "6bd746f5-5dd0-400e-82be-b8707becb7bf", 00:10:44.956 "is_configured": true, 00:10:44.956 "data_offset": 2048, 00:10:44.956 "data_size": 63488 00:10:44.956 }, 00:10:44.956 { 00:10:44.956 "name": "BaseBdev2", 00:10:44.956 "uuid": "132c18ea-1928-46c7-a98d-4046e25a5c37", 00:10:44.956 "is_configured": true, 00:10:44.956 "data_offset": 2048, 00:10:44.956 "data_size": 63488 00:10:44.956 }, 00:10:44.956 { 00:10:44.956 "name": "BaseBdev3", 00:10:44.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.956 "is_configured": false, 00:10:44.956 "data_offset": 0, 00:10:44.956 "data_size": 0 00:10:44.956 } 00:10:44.956 ] 00:10:44.956 }' 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.956 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.216 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.216 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.216 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.476 BaseBdev3 00:10:45.476 [2024-10-11 09:44:29.866104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.476 [2024-10-11 
09:44:29.866403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.476 [2024-10-11 09:44:29.866427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:45.476 [2024-10-11 09:44:29.866713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:45.476 [2024-10-11 09:44:29.866901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:45.476 [2024-10-11 09:44:29.866914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:45.476 [2024-10-11 09:44:29.867096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.476 [ 00:10:45.476 { 00:10:45.476 "name": "BaseBdev3", 00:10:45.476 "aliases": [ 00:10:45.476 "c061fbf4-359f-438f-94c9-73cf0b648115" 00:10:45.476 ], 00:10:45.476 "product_name": "Malloc disk", 00:10:45.476 "block_size": 512, 00:10:45.476 "num_blocks": 65536, 00:10:45.476 "uuid": "c061fbf4-359f-438f-94c9-73cf0b648115", 00:10:45.476 "assigned_rate_limits": { 00:10:45.476 "rw_ios_per_sec": 0, 00:10:45.476 "rw_mbytes_per_sec": 0, 00:10:45.476 "r_mbytes_per_sec": 0, 00:10:45.476 "w_mbytes_per_sec": 0 00:10:45.476 }, 00:10:45.476 "claimed": true, 00:10:45.476 "claim_type": "exclusive_write", 00:10:45.476 "zoned": false, 00:10:45.476 "supported_io_types": { 00:10:45.476 "read": true, 00:10:45.476 "write": true, 00:10:45.476 "unmap": true, 00:10:45.476 "flush": true, 00:10:45.476 "reset": true, 00:10:45.476 "nvme_admin": false, 00:10:45.476 "nvme_io": false, 00:10:45.476 "nvme_io_md": false, 00:10:45.476 "write_zeroes": true, 00:10:45.476 "zcopy": true, 00:10:45.476 "get_zone_info": false, 00:10:45.476 "zone_management": false, 00:10:45.476 "zone_append": false, 00:10:45.476 "compare": false, 00:10:45.476 "compare_and_write": false, 00:10:45.476 "abort": true, 00:10:45.476 "seek_hole": false, 00:10:45.476 "seek_data": false, 00:10:45.476 "copy": true, 00:10:45.476 "nvme_iov_md": false 00:10:45.476 }, 00:10:45.476 "memory_domains": [ 00:10:45.476 { 00:10:45.476 "dma_device_id": "system", 00:10:45.476 "dma_device_type": 1 00:10:45.476 }, 00:10:45.476 { 00:10:45.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.476 "dma_device_type": 2 00:10:45.476 } 00:10:45.476 ], 00:10:45.476 "driver_specific": {} 
00:10:45.476 } 00:10:45.476 ] 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.476 "name": "Existed_Raid", 00:10:45.476 "uuid": "141ad755-cc44-431c-9bf6-e601e56d37fe", 00:10:45.476 "strip_size_kb": 64, 00:10:45.476 "state": "online", 00:10:45.476 "raid_level": "concat", 00:10:45.476 "superblock": true, 00:10:45.476 "num_base_bdevs": 3, 00:10:45.476 "num_base_bdevs_discovered": 3, 00:10:45.476 "num_base_bdevs_operational": 3, 00:10:45.476 "base_bdevs_list": [ 00:10:45.476 { 00:10:45.476 "name": "BaseBdev1", 00:10:45.476 "uuid": "6bd746f5-5dd0-400e-82be-b8707becb7bf", 00:10:45.476 "is_configured": true, 00:10:45.476 "data_offset": 2048, 00:10:45.476 "data_size": 63488 00:10:45.476 }, 00:10:45.476 { 00:10:45.476 "name": "BaseBdev2", 00:10:45.476 "uuid": "132c18ea-1928-46c7-a98d-4046e25a5c37", 00:10:45.476 "is_configured": true, 00:10:45.476 "data_offset": 2048, 00:10:45.476 "data_size": 63488 00:10:45.476 }, 00:10:45.476 { 00:10:45.476 "name": "BaseBdev3", 00:10:45.476 "uuid": "c061fbf4-359f-438f-94c9-73cf0b648115", 00:10:45.476 "is_configured": true, 00:10:45.476 "data_offset": 2048, 00:10:45.476 "data_size": 63488 00:10:45.476 } 00:10:45.476 ] 00:10:45.476 }' 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.476 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.736 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.736 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.736 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:45.736 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.736 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.736 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.736 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.995 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.995 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.995 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.995 [2024-10-11 09:44:30.373701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.996 "name": "Existed_Raid", 00:10:45.996 "aliases": [ 00:10:45.996 "141ad755-cc44-431c-9bf6-e601e56d37fe" 00:10:45.996 ], 00:10:45.996 "product_name": "Raid Volume", 00:10:45.996 "block_size": 512, 00:10:45.996 "num_blocks": 190464, 00:10:45.996 "uuid": "141ad755-cc44-431c-9bf6-e601e56d37fe", 00:10:45.996 "assigned_rate_limits": { 00:10:45.996 "rw_ios_per_sec": 0, 00:10:45.996 "rw_mbytes_per_sec": 0, 00:10:45.996 "r_mbytes_per_sec": 0, 00:10:45.996 "w_mbytes_per_sec": 0 00:10:45.996 }, 00:10:45.996 "claimed": false, 00:10:45.996 "zoned": false, 00:10:45.996 "supported_io_types": { 00:10:45.996 "read": true, 00:10:45.996 "write": true, 00:10:45.996 "unmap": true, 00:10:45.996 "flush": true, 00:10:45.996 "reset": true, 00:10:45.996 "nvme_admin": false, 00:10:45.996 "nvme_io": false, 00:10:45.996 "nvme_io_md": false, 00:10:45.996 
"write_zeroes": true, 00:10:45.996 "zcopy": false, 00:10:45.996 "get_zone_info": false, 00:10:45.996 "zone_management": false, 00:10:45.996 "zone_append": false, 00:10:45.996 "compare": false, 00:10:45.996 "compare_and_write": false, 00:10:45.996 "abort": false, 00:10:45.996 "seek_hole": false, 00:10:45.996 "seek_data": false, 00:10:45.996 "copy": false, 00:10:45.996 "nvme_iov_md": false 00:10:45.996 }, 00:10:45.996 "memory_domains": [ 00:10:45.996 { 00:10:45.996 "dma_device_id": "system", 00:10:45.996 "dma_device_type": 1 00:10:45.996 }, 00:10:45.996 { 00:10:45.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.996 "dma_device_type": 2 00:10:45.996 }, 00:10:45.996 { 00:10:45.996 "dma_device_id": "system", 00:10:45.996 "dma_device_type": 1 00:10:45.996 }, 00:10:45.996 { 00:10:45.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.996 "dma_device_type": 2 00:10:45.996 }, 00:10:45.996 { 00:10:45.996 "dma_device_id": "system", 00:10:45.996 "dma_device_type": 1 00:10:45.996 }, 00:10:45.996 { 00:10:45.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.996 "dma_device_type": 2 00:10:45.996 } 00:10:45.996 ], 00:10:45.996 "driver_specific": { 00:10:45.996 "raid": { 00:10:45.996 "uuid": "141ad755-cc44-431c-9bf6-e601e56d37fe", 00:10:45.996 "strip_size_kb": 64, 00:10:45.996 "state": "online", 00:10:45.996 "raid_level": "concat", 00:10:45.996 "superblock": true, 00:10:45.996 "num_base_bdevs": 3, 00:10:45.996 "num_base_bdevs_discovered": 3, 00:10:45.996 "num_base_bdevs_operational": 3, 00:10:45.996 "base_bdevs_list": [ 00:10:45.996 { 00:10:45.996 "name": "BaseBdev1", 00:10:45.996 "uuid": "6bd746f5-5dd0-400e-82be-b8707becb7bf", 00:10:45.996 "is_configured": true, 00:10:45.996 "data_offset": 2048, 00:10:45.996 "data_size": 63488 00:10:45.996 }, 00:10:45.996 { 00:10:45.996 "name": "BaseBdev2", 00:10:45.996 "uuid": "132c18ea-1928-46c7-a98d-4046e25a5c37", 00:10:45.996 "is_configured": true, 00:10:45.996 "data_offset": 2048, 00:10:45.996 "data_size": 63488 00:10:45.996 }, 
00:10:45.996 { 00:10:45.996 "name": "BaseBdev3", 00:10:45.996 "uuid": "c061fbf4-359f-438f-94c9-73cf0b648115", 00:10:45.996 "is_configured": true, 00:10:45.996 "data_offset": 2048, 00:10:45.996 "data_size": 63488 00:10:45.996 } 00:10:45.996 ] 00:10:45.996 } 00:10:45.996 } 00:10:45.996 }' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.996 BaseBdev2 00:10:45.996 BaseBdev3' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.996 
09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.996 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.256 [2024-10-11 09:44:30.664912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.256 [2024-10-11 09:44:30.664942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.256 [2024-10-11 09:44:30.664998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.256 "name": "Existed_Raid", 00:10:46.256 "uuid": "141ad755-cc44-431c-9bf6-e601e56d37fe", 00:10:46.256 "strip_size_kb": 64, 00:10:46.256 "state": "offline", 00:10:46.256 "raid_level": "concat", 00:10:46.256 "superblock": true, 00:10:46.256 "num_base_bdevs": 3, 00:10:46.256 "num_base_bdevs_discovered": 2, 00:10:46.256 "num_base_bdevs_operational": 2, 00:10:46.256 "base_bdevs_list": [ 00:10:46.256 { 00:10:46.256 "name": null, 00:10:46.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.256 "is_configured": false, 00:10:46.256 "data_offset": 0, 00:10:46.256 "data_size": 63488 00:10:46.256 }, 00:10:46.256 { 00:10:46.256 "name": "BaseBdev2", 00:10:46.256 "uuid": "132c18ea-1928-46c7-a98d-4046e25a5c37", 00:10:46.256 "is_configured": true, 00:10:46.256 "data_offset": 2048, 00:10:46.256 "data_size": 63488 00:10:46.256 }, 00:10:46.256 { 00:10:46.256 "name": "BaseBdev3", 00:10:46.256 "uuid": "c061fbf4-359f-438f-94c9-73cf0b648115", 
00:10:46.256 "is_configured": true, 00:10:46.256 "data_offset": 2048, 00:10:46.256 "data_size": 63488 00:10:46.256 } 00:10:46.256 ] 00:10:46.256 }' 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.256 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.825 [2024-10-11 09:44:31.300341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.825 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.825 [2024-10-11 09:44:31.453887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.825 [2024-10-11 09:44:31.454002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.085 BaseBdev2 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:47.085 09:44:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.085 [ 00:10:47.085 { 00:10:47.085 "name": "BaseBdev2", 00:10:47.085 "aliases": [ 00:10:47.085 "6aa7ba9b-9116-4871-ac23-007b42446694" 00:10:47.085 ], 00:10:47.085 "product_name": "Malloc disk", 00:10:47.085 "block_size": 512, 00:10:47.085 "num_blocks": 65536, 00:10:47.085 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:47.085 "assigned_rate_limits": { 00:10:47.085 "rw_ios_per_sec": 0, 00:10:47.085 "rw_mbytes_per_sec": 0, 00:10:47.085 "r_mbytes_per_sec": 0, 00:10:47.085 "w_mbytes_per_sec": 0 00:10:47.085 }, 00:10:47.085 "claimed": false, 00:10:47.085 "zoned": false, 00:10:47.085 "supported_io_types": { 00:10:47.085 "read": true, 00:10:47.085 "write": true, 00:10:47.085 "unmap": true, 00:10:47.085 "flush": true, 00:10:47.085 "reset": true, 00:10:47.085 "nvme_admin": false, 00:10:47.085 "nvme_io": false, 00:10:47.085 "nvme_io_md": false, 00:10:47.085 "write_zeroes": true, 00:10:47.085 "zcopy": true, 00:10:47.085 "get_zone_info": false, 00:10:47.085 
"zone_management": false, 00:10:47.085 "zone_append": false, 00:10:47.085 "compare": false, 00:10:47.085 "compare_and_write": false, 00:10:47.085 "abort": true, 00:10:47.085 "seek_hole": false, 00:10:47.085 "seek_data": false, 00:10:47.085 "copy": true, 00:10:47.085 "nvme_iov_md": false 00:10:47.085 }, 00:10:47.085 "memory_domains": [ 00:10:47.085 { 00:10:47.085 "dma_device_id": "system", 00:10:47.085 "dma_device_type": 1 00:10:47.085 }, 00:10:47.085 { 00:10:47.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.085 "dma_device_type": 2 00:10:47.085 } 00:10:47.085 ], 00:10:47.085 "driver_specific": {} 00:10:47.085 } 00:10:47.085 ] 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.085 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.344 BaseBdev3 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.344 [ 00:10:47.344 { 00:10:47.344 "name": "BaseBdev3", 00:10:47.344 "aliases": [ 00:10:47.344 "b4c50d48-1691-4076-b788-3d015a7f713c" 00:10:47.344 ], 00:10:47.344 "product_name": "Malloc disk", 00:10:47.344 "block_size": 512, 00:10:47.344 "num_blocks": 65536, 00:10:47.344 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:47.344 "assigned_rate_limits": { 00:10:47.344 "rw_ios_per_sec": 0, 00:10:47.344 "rw_mbytes_per_sec": 0, 00:10:47.344 "r_mbytes_per_sec": 0, 00:10:47.344 "w_mbytes_per_sec": 0 00:10:47.344 }, 00:10:47.344 "claimed": false, 00:10:47.344 "zoned": false, 00:10:47.344 "supported_io_types": { 00:10:47.344 "read": true, 00:10:47.344 "write": true, 00:10:47.344 "unmap": true, 00:10:47.344 "flush": true, 00:10:47.344 "reset": true, 00:10:47.344 "nvme_admin": false, 00:10:47.344 "nvme_io": false, 00:10:47.344 "nvme_io_md": false, 00:10:47.344 "write_zeroes": true, 00:10:47.344 
"zcopy": true, 00:10:47.344 "get_zone_info": false, 00:10:47.344 "zone_management": false, 00:10:47.344 "zone_append": false, 00:10:47.344 "compare": false, 00:10:47.344 "compare_and_write": false, 00:10:47.344 "abort": true, 00:10:47.344 "seek_hole": false, 00:10:47.344 "seek_data": false, 00:10:47.344 "copy": true, 00:10:47.344 "nvme_iov_md": false 00:10:47.344 }, 00:10:47.344 "memory_domains": [ 00:10:47.344 { 00:10:47.344 "dma_device_id": "system", 00:10:47.344 "dma_device_type": 1 00:10:47.344 }, 00:10:47.344 { 00:10:47.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.344 "dma_device_type": 2 00:10:47.344 } 00:10:47.344 ], 00:10:47.344 "driver_specific": {} 00:10:47.344 } 00:10:47.344 ] 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.344 [2024-10-11 09:44:31.784135] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.344 [2024-10-11 09:44:31.784225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.344 [2024-10-11 09:44:31.784273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.344 [2024-10-11 09:44:31.786218] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.344 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.345 09:44:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.345 "name": "Existed_Raid", 00:10:47.345 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:47.345 "strip_size_kb": 64, 00:10:47.345 "state": "configuring", 00:10:47.345 "raid_level": "concat", 00:10:47.345 "superblock": true, 00:10:47.345 "num_base_bdevs": 3, 00:10:47.345 "num_base_bdevs_discovered": 2, 00:10:47.345 "num_base_bdevs_operational": 3, 00:10:47.345 "base_bdevs_list": [ 00:10:47.345 { 00:10:47.345 "name": "BaseBdev1", 00:10:47.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.345 "is_configured": false, 00:10:47.345 "data_offset": 0, 00:10:47.345 "data_size": 0 00:10:47.345 }, 00:10:47.345 { 00:10:47.345 "name": "BaseBdev2", 00:10:47.345 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:47.345 "is_configured": true, 00:10:47.345 "data_offset": 2048, 00:10:47.345 "data_size": 63488 00:10:47.345 }, 00:10:47.345 { 00:10:47.345 "name": "BaseBdev3", 00:10:47.345 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:47.345 "is_configured": true, 00:10:47.345 "data_offset": 2048, 00:10:47.345 "data_size": 63488 00:10:47.345 } 00:10:47.345 ] 00:10:47.345 }' 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.345 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 [2024-10-11 09:44:32.259372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.913 09:44:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.913 "name": "Existed_Raid", 00:10:47.913 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:47.913 "strip_size_kb": 64, 
00:10:47.913 "state": "configuring", 00:10:47.913 "raid_level": "concat", 00:10:47.913 "superblock": true, 00:10:47.913 "num_base_bdevs": 3, 00:10:47.913 "num_base_bdevs_discovered": 1, 00:10:47.913 "num_base_bdevs_operational": 3, 00:10:47.913 "base_bdevs_list": [ 00:10:47.913 { 00:10:47.913 "name": "BaseBdev1", 00:10:47.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.913 "is_configured": false, 00:10:47.913 "data_offset": 0, 00:10:47.913 "data_size": 0 00:10:47.913 }, 00:10:47.913 { 00:10:47.913 "name": null, 00:10:47.913 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:47.913 "is_configured": false, 00:10:47.913 "data_offset": 0, 00:10:47.913 "data_size": 63488 00:10:47.913 }, 00:10:47.913 { 00:10:47.913 "name": "BaseBdev3", 00:10:47.913 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:47.913 "is_configured": true, 00:10:47.913 "data_offset": 2048, 00:10:47.913 "data_size": 63488 00:10:47.913 } 00:10:47.913 ] 00:10:47.913 }' 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.913 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.172 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.430 [2024-10-11 09:44:32.816073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.430 BaseBdev1 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.430 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.430 
[ 00:10:48.430 { 00:10:48.430 "name": "BaseBdev1", 00:10:48.430 "aliases": [ 00:10:48.430 "cc2fdefd-e2f7-489f-a75d-48275700bc39" 00:10:48.430 ], 00:10:48.430 "product_name": "Malloc disk", 00:10:48.430 "block_size": 512, 00:10:48.430 "num_blocks": 65536, 00:10:48.430 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:48.430 "assigned_rate_limits": { 00:10:48.430 "rw_ios_per_sec": 0, 00:10:48.430 "rw_mbytes_per_sec": 0, 00:10:48.430 "r_mbytes_per_sec": 0, 00:10:48.430 "w_mbytes_per_sec": 0 00:10:48.430 }, 00:10:48.430 "claimed": true, 00:10:48.430 "claim_type": "exclusive_write", 00:10:48.430 "zoned": false, 00:10:48.430 "supported_io_types": { 00:10:48.430 "read": true, 00:10:48.430 "write": true, 00:10:48.430 "unmap": true, 00:10:48.430 "flush": true, 00:10:48.430 "reset": true, 00:10:48.430 "nvme_admin": false, 00:10:48.430 "nvme_io": false, 00:10:48.430 "nvme_io_md": false, 00:10:48.430 "write_zeroes": true, 00:10:48.430 "zcopy": true, 00:10:48.430 "get_zone_info": false, 00:10:48.430 "zone_management": false, 00:10:48.430 "zone_append": false, 00:10:48.430 "compare": false, 00:10:48.430 "compare_and_write": false, 00:10:48.430 "abort": true, 00:10:48.430 "seek_hole": false, 00:10:48.431 "seek_data": false, 00:10:48.431 "copy": true, 00:10:48.431 "nvme_iov_md": false 00:10:48.431 }, 00:10:48.431 "memory_domains": [ 00:10:48.431 { 00:10:48.431 "dma_device_id": "system", 00:10:48.431 "dma_device_type": 1 00:10:48.431 }, 00:10:48.431 { 00:10:48.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.431 "dma_device_type": 2 00:10:48.431 } 00:10:48.431 ], 00:10:48.431 "driver_specific": {} 00:10:48.431 } 00:10:48.431 ] 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.431 "name": "Existed_Raid", 00:10:48.431 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:48.431 "strip_size_kb": 64, 00:10:48.431 "state": "configuring", 00:10:48.431 "raid_level": "concat", 00:10:48.431 "superblock": true, 
00:10:48.431 "num_base_bdevs": 3, 00:10:48.431 "num_base_bdevs_discovered": 2, 00:10:48.431 "num_base_bdevs_operational": 3, 00:10:48.431 "base_bdevs_list": [ 00:10:48.431 { 00:10:48.431 "name": "BaseBdev1", 00:10:48.431 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:48.431 "is_configured": true, 00:10:48.431 "data_offset": 2048, 00:10:48.431 "data_size": 63488 00:10:48.431 }, 00:10:48.431 { 00:10:48.431 "name": null, 00:10:48.431 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:48.431 "is_configured": false, 00:10:48.431 "data_offset": 0, 00:10:48.431 "data_size": 63488 00:10:48.431 }, 00:10:48.431 { 00:10:48.431 "name": "BaseBdev3", 00:10:48.431 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:48.431 "is_configured": true, 00:10:48.431 "data_offset": 2048, 00:10:48.431 "data_size": 63488 00:10:48.431 } 00:10:48.431 ] 00:10:48.431 }' 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.431 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.690 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.690 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.690 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.690 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.690 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.949 [2024-10-11 09:44:33.355300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.949 "name": "Existed_Raid", 00:10:48.949 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:48.949 "strip_size_kb": 64, 00:10:48.949 "state": "configuring", 00:10:48.949 "raid_level": "concat", 00:10:48.949 "superblock": true, 00:10:48.949 "num_base_bdevs": 3, 00:10:48.949 "num_base_bdevs_discovered": 1, 00:10:48.949 "num_base_bdevs_operational": 3, 00:10:48.949 "base_bdevs_list": [ 00:10:48.949 { 00:10:48.949 "name": "BaseBdev1", 00:10:48.949 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:48.949 "is_configured": true, 00:10:48.949 "data_offset": 2048, 00:10:48.949 "data_size": 63488 00:10:48.949 }, 00:10:48.949 { 00:10:48.949 "name": null, 00:10:48.949 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:48.949 "is_configured": false, 00:10:48.949 "data_offset": 0, 00:10:48.949 "data_size": 63488 00:10:48.949 }, 00:10:48.949 { 00:10:48.949 "name": null, 00:10:48.949 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:48.949 "is_configured": false, 00:10:48.949 "data_offset": 0, 00:10:48.949 "data_size": 63488 00:10:48.949 } 00:10:48.949 ] 00:10:48.949 }' 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.949 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.208 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.208 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.208 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.208 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.209 [2024-10-11 09:44:33.830575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.209 09:44:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.468 "name": "Existed_Raid", 00:10:49.468 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:49.468 "strip_size_kb": 64, 00:10:49.468 "state": "configuring", 00:10:49.468 "raid_level": "concat", 00:10:49.468 "superblock": true, 00:10:49.468 "num_base_bdevs": 3, 00:10:49.468 "num_base_bdevs_discovered": 2, 00:10:49.468 "num_base_bdevs_operational": 3, 00:10:49.468 "base_bdevs_list": [ 00:10:49.468 { 00:10:49.468 "name": "BaseBdev1", 00:10:49.468 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:49.468 "is_configured": true, 00:10:49.468 "data_offset": 2048, 00:10:49.468 "data_size": 63488 00:10:49.468 }, 00:10:49.468 { 00:10:49.468 "name": null, 00:10:49.468 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:49.468 "is_configured": false, 00:10:49.468 "data_offset": 0, 00:10:49.468 "data_size": 63488 00:10:49.468 }, 00:10:49.468 { 00:10:49.468 "name": "BaseBdev3", 00:10:49.468 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:49.468 "is_configured": true, 00:10:49.468 "data_offset": 2048, 00:10:49.468 "data_size": 63488 00:10:49.468 } 00:10:49.468 ] 00:10:49.468 }' 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.468 09:44:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.728 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.728 [2024-10-11 09:44:34.349701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.987 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.987 "name": "Existed_Raid", 00:10:49.987 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:49.987 "strip_size_kb": 64, 00:10:49.987 "state": "configuring", 00:10:49.987 "raid_level": "concat", 00:10:49.987 "superblock": true, 00:10:49.987 "num_base_bdevs": 3, 00:10:49.987 "num_base_bdevs_discovered": 1, 00:10:49.987 "num_base_bdevs_operational": 3, 00:10:49.987 "base_bdevs_list": [ 00:10:49.987 { 00:10:49.987 "name": null, 00:10:49.987 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:49.987 "is_configured": false, 00:10:49.987 "data_offset": 0, 00:10:49.987 "data_size": 63488 00:10:49.987 }, 00:10:49.987 { 00:10:49.987 "name": null, 00:10:49.987 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:49.987 "is_configured": false, 00:10:49.987 "data_offset": 0, 
00:10:49.987 "data_size": 63488 00:10:49.987 }, 00:10:49.987 { 00:10:49.987 "name": "BaseBdev3", 00:10:49.987 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:49.987 "is_configured": true, 00:10:49.987 "data_offset": 2048, 00:10:49.988 "data_size": 63488 00:10:49.988 } 00:10:49.988 ] 00:10:49.988 }' 00:10:49.988 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.988 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 [2024-10-11 09:44:34.949644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:50.556 09:44:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.556 "name": "Existed_Raid", 00:10:50.556 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:50.556 "strip_size_kb": 64, 00:10:50.556 "state": "configuring", 00:10:50.556 "raid_level": "concat", 00:10:50.556 "superblock": true, 00:10:50.556 "num_base_bdevs": 3, 00:10:50.556 
"num_base_bdevs_discovered": 2, 00:10:50.556 "num_base_bdevs_operational": 3, 00:10:50.556 "base_bdevs_list": [ 00:10:50.556 { 00:10:50.556 "name": null, 00:10:50.556 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:50.556 "is_configured": false, 00:10:50.556 "data_offset": 0, 00:10:50.556 "data_size": 63488 00:10:50.556 }, 00:10:50.556 { 00:10:50.556 "name": "BaseBdev2", 00:10:50.556 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:50.556 "is_configured": true, 00:10:50.556 "data_offset": 2048, 00:10:50.556 "data_size": 63488 00:10:50.556 }, 00:10:50.556 { 00:10:50.556 "name": "BaseBdev3", 00:10:50.556 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:50.556 "is_configured": true, 00:10:50.556 "data_offset": 2048, 00:10:50.556 "data_size": 63488 00:10:50.556 } 00:10:50.556 ] 00:10:50.556 }' 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.556 09:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.841 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.841 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.841 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.841 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.841 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.100 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.101 09:44:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc2fdefd-e2f7-489f-a75d-48275700bc39 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.101 [2024-10-11 09:44:35.587146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:51.101 [2024-10-11 09:44:35.587406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:51.101 [2024-10-11 09:44:35.587424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:51.101 [2024-10-11 09:44:35.587700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:51.101 NewBaseBdev 00:10:51.101 [2024-10-11 09:44:35.587933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:51.101 [2024-10-11 09:44:35.587946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:51.101 [2024-10-11 09:44:35.588100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.101 [ 00:10:51.101 { 00:10:51.101 "name": "NewBaseBdev", 00:10:51.101 "aliases": [ 00:10:51.101 "cc2fdefd-e2f7-489f-a75d-48275700bc39" 00:10:51.101 ], 00:10:51.101 "product_name": "Malloc disk", 00:10:51.101 "block_size": 512, 00:10:51.101 "num_blocks": 65536, 00:10:51.101 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:51.101 "assigned_rate_limits": { 00:10:51.101 "rw_ios_per_sec": 0, 00:10:51.101 "rw_mbytes_per_sec": 0, 00:10:51.101 "r_mbytes_per_sec": 0, 00:10:51.101 "w_mbytes_per_sec": 0 00:10:51.101 }, 00:10:51.101 "claimed": true, 00:10:51.101 "claim_type": "exclusive_write", 00:10:51.101 "zoned": false, 00:10:51.101 "supported_io_types": { 00:10:51.101 "read": true, 00:10:51.101 "write": true, 
00:10:51.101 "unmap": true, 00:10:51.101 "flush": true, 00:10:51.101 "reset": true, 00:10:51.101 "nvme_admin": false, 00:10:51.101 "nvme_io": false, 00:10:51.101 "nvme_io_md": false, 00:10:51.101 "write_zeroes": true, 00:10:51.101 "zcopy": true, 00:10:51.101 "get_zone_info": false, 00:10:51.101 "zone_management": false, 00:10:51.101 "zone_append": false, 00:10:51.101 "compare": false, 00:10:51.101 "compare_and_write": false, 00:10:51.101 "abort": true, 00:10:51.101 "seek_hole": false, 00:10:51.101 "seek_data": false, 00:10:51.101 "copy": true, 00:10:51.101 "nvme_iov_md": false 00:10:51.101 }, 00:10:51.101 "memory_domains": [ 00:10:51.101 { 00:10:51.101 "dma_device_id": "system", 00:10:51.101 "dma_device_type": 1 00:10:51.101 }, 00:10:51.101 { 00:10:51.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.101 "dma_device_type": 2 00:10:51.101 } 00:10:51.101 ], 00:10:51.101 "driver_specific": {} 00:10:51.101 } 00:10:51.101 ] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.101 "name": "Existed_Raid", 00:10:51.101 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:51.101 "strip_size_kb": 64, 00:10:51.101 "state": "online", 00:10:51.101 "raid_level": "concat", 00:10:51.101 "superblock": true, 00:10:51.101 "num_base_bdevs": 3, 00:10:51.101 "num_base_bdevs_discovered": 3, 00:10:51.101 "num_base_bdevs_operational": 3, 00:10:51.101 "base_bdevs_list": [ 00:10:51.101 { 00:10:51.101 "name": "NewBaseBdev", 00:10:51.101 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:51.101 "is_configured": true, 00:10:51.101 "data_offset": 2048, 00:10:51.101 "data_size": 63488 00:10:51.101 }, 00:10:51.101 { 00:10:51.101 "name": "BaseBdev2", 00:10:51.101 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:51.101 "is_configured": true, 00:10:51.101 "data_offset": 2048, 00:10:51.101 "data_size": 63488 00:10:51.101 }, 00:10:51.101 { 00:10:51.101 "name": "BaseBdev3", 00:10:51.101 "uuid": 
"b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:51.101 "is_configured": true, 00:10:51.101 "data_offset": 2048, 00:10:51.101 "data_size": 63488 00:10:51.101 } 00:10:51.101 ] 00:10:51.101 }' 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.101 09:44:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.684 [2024-10-11 09:44:36.110664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.684 "name": "Existed_Raid", 00:10:51.684 "aliases": [ 00:10:51.684 "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1" 
00:10:51.684 ], 00:10:51.684 "product_name": "Raid Volume", 00:10:51.684 "block_size": 512, 00:10:51.684 "num_blocks": 190464, 00:10:51.684 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:51.684 "assigned_rate_limits": { 00:10:51.684 "rw_ios_per_sec": 0, 00:10:51.684 "rw_mbytes_per_sec": 0, 00:10:51.684 "r_mbytes_per_sec": 0, 00:10:51.684 "w_mbytes_per_sec": 0 00:10:51.684 }, 00:10:51.684 "claimed": false, 00:10:51.684 "zoned": false, 00:10:51.684 "supported_io_types": { 00:10:51.684 "read": true, 00:10:51.684 "write": true, 00:10:51.684 "unmap": true, 00:10:51.684 "flush": true, 00:10:51.684 "reset": true, 00:10:51.684 "nvme_admin": false, 00:10:51.684 "nvme_io": false, 00:10:51.684 "nvme_io_md": false, 00:10:51.684 "write_zeroes": true, 00:10:51.684 "zcopy": false, 00:10:51.684 "get_zone_info": false, 00:10:51.684 "zone_management": false, 00:10:51.684 "zone_append": false, 00:10:51.684 "compare": false, 00:10:51.684 "compare_and_write": false, 00:10:51.684 "abort": false, 00:10:51.684 "seek_hole": false, 00:10:51.684 "seek_data": false, 00:10:51.684 "copy": false, 00:10:51.684 "nvme_iov_md": false 00:10:51.684 }, 00:10:51.684 "memory_domains": [ 00:10:51.684 { 00:10:51.684 "dma_device_id": "system", 00:10:51.684 "dma_device_type": 1 00:10:51.684 }, 00:10:51.684 { 00:10:51.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.684 "dma_device_type": 2 00:10:51.684 }, 00:10:51.684 { 00:10:51.684 "dma_device_id": "system", 00:10:51.684 "dma_device_type": 1 00:10:51.684 }, 00:10:51.684 { 00:10:51.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.684 "dma_device_type": 2 00:10:51.684 }, 00:10:51.684 { 00:10:51.684 "dma_device_id": "system", 00:10:51.684 "dma_device_type": 1 00:10:51.684 }, 00:10:51.684 { 00:10:51.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.684 "dma_device_type": 2 00:10:51.684 } 00:10:51.684 ], 00:10:51.684 "driver_specific": { 00:10:51.684 "raid": { 00:10:51.684 "uuid": "6ff93060-ded8-4bd6-88b3-ac9eec34e3f1", 00:10:51.684 
"strip_size_kb": 64, 00:10:51.684 "state": "online", 00:10:51.684 "raid_level": "concat", 00:10:51.684 "superblock": true, 00:10:51.684 "num_base_bdevs": 3, 00:10:51.684 "num_base_bdevs_discovered": 3, 00:10:51.684 "num_base_bdevs_operational": 3, 00:10:51.684 "base_bdevs_list": [ 00:10:51.684 { 00:10:51.684 "name": "NewBaseBdev", 00:10:51.684 "uuid": "cc2fdefd-e2f7-489f-a75d-48275700bc39", 00:10:51.684 "is_configured": true, 00:10:51.684 "data_offset": 2048, 00:10:51.684 "data_size": 63488 00:10:51.684 }, 00:10:51.684 { 00:10:51.684 "name": "BaseBdev2", 00:10:51.684 "uuid": "6aa7ba9b-9116-4871-ac23-007b42446694", 00:10:51.684 "is_configured": true, 00:10:51.684 "data_offset": 2048, 00:10:51.684 "data_size": 63488 00:10:51.684 }, 00:10:51.684 { 00:10:51.684 "name": "BaseBdev3", 00:10:51.684 "uuid": "b4c50d48-1691-4076-b788-3d015a7f713c", 00:10:51.684 "is_configured": true, 00:10:51.684 "data_offset": 2048, 00:10:51.684 "data_size": 63488 00:10:51.684 } 00:10:51.684 ] 00:10:51.684 } 00:10:51.684 } 00:10:51.684 }' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:51.684 BaseBdev2 00:10:51.684 BaseBdev3' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.684 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 [2024-10-11 09:44:36.385977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.944 [2024-10-11 09:44:36.386015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.944 [2024-10-11 09:44:36.386125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.944 [2024-10-11 09:44:36.386188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.944 [2024-10-11 09:44:36.386201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66678 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66678 ']' 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 66678 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66678 00:10:51.944 killing process with pid 66678 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66678' 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66678 00:10:51.944 [2024-10-11 09:44:36.435219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.944 09:44:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66678 00:10:52.202 [2024-10-11 09:44:36.747647] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.580 09:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:53.580 00:10:53.580 real 0m11.070s 00:10:53.580 user 0m17.643s 00:10:53.580 sys 0m1.901s 00:10:53.580 09:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.580 ************************************ 00:10:53.580 END TEST raid_state_function_test_sb 00:10:53.580 ************************************ 00:10:53.580 09:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.580 09:44:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:53.580 09:44:37 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:53.580 09:44:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.580 09:44:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.580 ************************************ 00:10:53.580 START TEST raid_superblock_test 00:10:53.580 ************************************ 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:53.580 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:53.581 09:44:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67305 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67305 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 67305 ']' 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.581 09:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:53.581 [2024-10-11 09:44:38.040827] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:53.581 [2024-10-11 09:44:38.041078] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67305 ] 00:10:53.581 [2024-10-11 09:44:38.206208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.909 [2024-10-11 09:44:38.335901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.168 [2024-10-11 09:44:38.574519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.168 [2024-10-11 09:44:38.574594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:54.428 
09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.428 malloc1 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.428 [2024-10-11 09:44:38.968307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:54.428 [2024-10-11 09:44:38.968456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.428 [2024-10-11 09:44:38.968521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:54.428 [2024-10-11 09:44:38.968565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.428 [2024-10-11 09:44:38.970944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.428 [2024-10-11 09:44:38.971016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:54.428 pt1 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.428 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.428 malloc2 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.428 [2024-10-11 09:44:39.033009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:54.428 [2024-10-11 09:44:39.033070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.428 [2024-10-11 09:44:39.033122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:54.428 [2024-10-11 09:44:39.033131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.428 [2024-10-11 09:44:39.035387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.428 [2024-10-11 09:44:39.035427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:54.428 
pt2 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.428 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.688 malloc3 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.688 [2024-10-11 09:44:39.109454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:54.688 [2024-10-11 09:44:39.109565] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.688 [2024-10-11 09:44:39.109607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:54.688 [2024-10-11 09:44:39.109636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.688 [2024-10-11 09:44:39.112036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.688 [2024-10-11 09:44:39.112116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:54.688 pt3 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.688 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.688 [2024-10-11 09:44:39.121506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:54.688 [2024-10-11 09:44:39.123507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:54.688 [2024-10-11 09:44:39.123629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:54.688 [2024-10-11 09:44:39.123855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:54.688 [2024-10-11 09:44:39.123913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:54.689 [2024-10-11 09:44:39.124228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:54.689 [2024-10-11 09:44:39.124474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:54.689 [2024-10-11 09:44:39.124522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:54.689 [2024-10-11 09:44:39.124771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.689 09:44:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.689 "name": "raid_bdev1", 00:10:54.689 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:54.689 "strip_size_kb": 64, 00:10:54.689 "state": "online", 00:10:54.689 "raid_level": "concat", 00:10:54.689 "superblock": true, 00:10:54.689 "num_base_bdevs": 3, 00:10:54.689 "num_base_bdevs_discovered": 3, 00:10:54.689 "num_base_bdevs_operational": 3, 00:10:54.689 "base_bdevs_list": [ 00:10:54.689 { 00:10:54.689 "name": "pt1", 00:10:54.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.689 "is_configured": true, 00:10:54.689 "data_offset": 2048, 00:10:54.689 "data_size": 63488 00:10:54.689 }, 00:10:54.689 { 00:10:54.689 "name": "pt2", 00:10:54.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.689 "is_configured": true, 00:10:54.689 "data_offset": 2048, 00:10:54.689 "data_size": 63488 00:10:54.689 }, 00:10:54.689 { 00:10:54.689 "name": "pt3", 00:10:54.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.689 "is_configured": true, 00:10:54.689 "data_offset": 2048, 00:10:54.689 "data_size": 63488 00:10:54.689 } 00:10:54.689 ] 00:10:54.689 }' 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.689 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.948 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.948 [2024-10-11 09:44:39.557110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.208 "name": "raid_bdev1", 00:10:55.208 "aliases": [ 00:10:55.208 "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9" 00:10:55.208 ], 00:10:55.208 "product_name": "Raid Volume", 00:10:55.208 "block_size": 512, 00:10:55.208 "num_blocks": 190464, 00:10:55.208 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:55.208 "assigned_rate_limits": { 00:10:55.208 "rw_ios_per_sec": 0, 00:10:55.208 "rw_mbytes_per_sec": 0, 00:10:55.208 "r_mbytes_per_sec": 0, 00:10:55.208 "w_mbytes_per_sec": 0 00:10:55.208 }, 00:10:55.208 "claimed": false, 00:10:55.208 "zoned": false, 00:10:55.208 "supported_io_types": { 00:10:55.208 "read": true, 00:10:55.208 "write": true, 00:10:55.208 "unmap": true, 00:10:55.208 "flush": true, 00:10:55.208 "reset": true, 00:10:55.208 "nvme_admin": false, 00:10:55.208 "nvme_io": false, 00:10:55.208 "nvme_io_md": false, 00:10:55.208 "write_zeroes": true, 00:10:55.208 "zcopy": false, 00:10:55.208 "get_zone_info": false, 00:10:55.208 "zone_management": false, 00:10:55.208 "zone_append": false, 00:10:55.208 "compare": 
false, 00:10:55.208 "compare_and_write": false, 00:10:55.208 "abort": false, 00:10:55.208 "seek_hole": false, 00:10:55.208 "seek_data": false, 00:10:55.208 "copy": false, 00:10:55.208 "nvme_iov_md": false 00:10:55.208 }, 00:10:55.208 "memory_domains": [ 00:10:55.208 { 00:10:55.208 "dma_device_id": "system", 00:10:55.208 "dma_device_type": 1 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.208 "dma_device_type": 2 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "dma_device_id": "system", 00:10:55.208 "dma_device_type": 1 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.208 "dma_device_type": 2 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "dma_device_id": "system", 00:10:55.208 "dma_device_type": 1 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.208 "dma_device_type": 2 00:10:55.208 } 00:10:55.208 ], 00:10:55.208 "driver_specific": { 00:10:55.208 "raid": { 00:10:55.208 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:55.208 "strip_size_kb": 64, 00:10:55.208 "state": "online", 00:10:55.208 "raid_level": "concat", 00:10:55.208 "superblock": true, 00:10:55.208 "num_base_bdevs": 3, 00:10:55.208 "num_base_bdevs_discovered": 3, 00:10:55.208 "num_base_bdevs_operational": 3, 00:10:55.208 "base_bdevs_list": [ 00:10:55.208 { 00:10:55.208 "name": "pt1", 00:10:55.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.208 "is_configured": true, 00:10:55.208 "data_offset": 2048, 00:10:55.208 "data_size": 63488 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "name": "pt2", 00:10:55.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.208 "is_configured": true, 00:10:55.208 "data_offset": 2048, 00:10:55.208 "data_size": 63488 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "name": "pt3", 00:10:55.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.208 "is_configured": true, 00:10:55.208 "data_offset": 2048, 00:10:55.208 
"data_size": 63488 00:10:55.208 } 00:10:55.208 ] 00:10:55.208 } 00:10:55.208 } 00:10:55.208 }' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:55.208 pt2 00:10:55.208 pt3' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.208 09:44:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.208 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.209 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.209 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.209 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.209 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 [2024-10-11 09:44:39.848570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.469 09:44:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b85db4ac-8ce9-4ff4-97c1-fa142ea420e9 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b85db4ac-8ce9-4ff4-97c1-fa142ea420e9 ']' 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 [2024-10-11 09:44:39.892160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.469 [2024-10-11 09:44:39.892246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.469 [2024-10-11 09:44:39.892374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.469 [2024-10-11 09:44:39.892469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.469 [2024-10-11 09:44:39.892508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 09:44:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 [2024-10-11 09:44:40.039982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:55.469 [2024-10-11 09:44:40.042209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:10:55.469 [2024-10-11 09:44:40.042320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:55.469 [2024-10-11 09:44:40.042407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:55.469 [2024-10-11 09:44:40.042517] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:55.469 [2024-10-11 09:44:40.042582] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:55.469 [2024-10-11 09:44:40.042642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.469 [2024-10-11 09:44:40.042682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:55.469 request: 00:10:55.469 { 00:10:55.469 "name": "raid_bdev1", 00:10:55.469 "raid_level": "concat", 00:10:55.469 "base_bdevs": [ 00:10:55.469 "malloc1", 00:10:55.469 "malloc2", 00:10:55.469 "malloc3" 00:10:55.469 ], 00:10:55.469 "strip_size_kb": 64, 00:10:55.469 "superblock": false, 00:10:55.469 "method": "bdev_raid_create", 00:10:55.469 "req_id": 1 00:10:55.469 } 00:10:55.469 Got JSON-RPC error response 00:10:55.469 response: 00:10:55.469 { 00:10:55.469 "code": -17, 00:10:55.469 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:55.469 } 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.729 [2024-10-11 09:44:40.107925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:55.729 [2024-10-11 09:44:40.108058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.729 [2024-10-11 09:44:40.108109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:55.729 [2024-10-11 09:44:40.108146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.729 [2024-10-11 09:44:40.110625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.729 [2024-10-11 09:44:40.110705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:55.729 [2024-10-11 09:44:40.110867] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:55.729 [2024-10-11 09:44:40.110978] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:55.729 pt1 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.729 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.729 "name": "raid_bdev1", 
00:10:55.729 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:55.729 "strip_size_kb": 64, 00:10:55.729 "state": "configuring", 00:10:55.729 "raid_level": "concat", 00:10:55.729 "superblock": true, 00:10:55.729 "num_base_bdevs": 3, 00:10:55.729 "num_base_bdevs_discovered": 1, 00:10:55.729 "num_base_bdevs_operational": 3, 00:10:55.729 "base_bdevs_list": [ 00:10:55.729 { 00:10:55.729 "name": "pt1", 00:10:55.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.729 "is_configured": true, 00:10:55.729 "data_offset": 2048, 00:10:55.729 "data_size": 63488 00:10:55.729 }, 00:10:55.729 { 00:10:55.729 "name": null, 00:10:55.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.729 "is_configured": false, 00:10:55.729 "data_offset": 2048, 00:10:55.729 "data_size": 63488 00:10:55.729 }, 00:10:55.729 { 00:10:55.729 "name": null, 00:10:55.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.729 "is_configured": false, 00:10:55.729 "data_offset": 2048, 00:10:55.730 "data_size": 63488 00:10:55.730 } 00:10:55.730 ] 00:10:55.730 }' 00:10:55.730 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.730 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.989 [2024-10-11 09:44:40.579090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:55.989 [2024-10-11 09:44:40.579160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.989 [2024-10-11 09:44:40.579185] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:55.989 [2024-10-11 09:44:40.579195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.989 [2024-10-11 09:44:40.579651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.989 [2024-10-11 09:44:40.579668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:55.989 [2024-10-11 09:44:40.579806] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:55.989 [2024-10-11 09:44:40.579833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:55.989 pt2 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.989 [2024-10-11 09:44:40.591052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.989 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.990 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.990 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.990 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.990 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.990 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.249 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.249 "name": "raid_bdev1", 00:10:56.249 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:56.249 "strip_size_kb": 64, 00:10:56.249 "state": "configuring", 00:10:56.249 "raid_level": "concat", 00:10:56.249 "superblock": true, 00:10:56.249 "num_base_bdevs": 3, 00:10:56.249 "num_base_bdevs_discovered": 1, 00:10:56.249 "num_base_bdevs_operational": 3, 00:10:56.249 "base_bdevs_list": [ 00:10:56.249 { 00:10:56.249 "name": "pt1", 00:10:56.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.249 "is_configured": true, 00:10:56.249 "data_offset": 2048, 00:10:56.249 "data_size": 63488 00:10:56.249 }, 00:10:56.249 { 00:10:56.249 "name": null, 00:10:56.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.249 "is_configured": false, 00:10:56.249 "data_offset": 0, 00:10:56.249 "data_size": 63488 00:10:56.249 }, 00:10:56.249 { 00:10:56.249 "name": null, 00:10:56.249 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.249 "is_configured": false, 00:10:56.249 "data_offset": 2048, 00:10:56.249 "data_size": 63488 00:10:56.249 } 00:10:56.249 ] 00:10:56.249 }' 00:10:56.249 09:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.249 09:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.508 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:56.508 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.508 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.509 [2024-10-11 09:44:41.038286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.509 [2024-10-11 09:44:41.038405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.509 [2024-10-11 09:44:41.038450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:56.509 [2024-10-11 09:44:41.038482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.509 [2024-10-11 09:44:41.038973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.509 [2024-10-11 09:44:41.039033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.509 [2024-10-11 09:44:41.039151] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:56.509 [2024-10-11 09:44:41.039212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.509 pt2 00:10:56.509 09:44:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.509 [2024-10-11 09:44:41.050266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.509 [2024-10-11 09:44:41.050358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.509 [2024-10-11 09:44:41.050418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:56.509 [2024-10-11 09:44:41.050453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.509 [2024-10-11 09:44:41.050926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.509 [2024-10-11 09:44:41.050991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.509 [2024-10-11 09:44:41.051089] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:56.509 [2024-10-11 09:44:41.051142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.509 [2024-10-11 09:44:41.051309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.509 [2024-10-11 09:44:41.051353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:56.509 [2024-10-11 09:44:41.051649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:56.509 [2024-10-11 09:44:41.051917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.509 [2024-10-11 09:44:41.051968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:56.509 [2024-10-11 09:44:41.052166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.509 pt3 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.509 09:44:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.509 "name": "raid_bdev1", 00:10:56.509 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:56.509 "strip_size_kb": 64, 00:10:56.509 "state": "online", 00:10:56.509 "raid_level": "concat", 00:10:56.509 "superblock": true, 00:10:56.509 "num_base_bdevs": 3, 00:10:56.509 "num_base_bdevs_discovered": 3, 00:10:56.509 "num_base_bdevs_operational": 3, 00:10:56.509 "base_bdevs_list": [ 00:10:56.509 { 00:10:56.509 "name": "pt1", 00:10:56.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.509 "is_configured": true, 00:10:56.509 "data_offset": 2048, 00:10:56.509 "data_size": 63488 00:10:56.509 }, 00:10:56.509 { 00:10:56.509 "name": "pt2", 00:10:56.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.509 "is_configured": true, 00:10:56.509 "data_offset": 2048, 00:10:56.509 "data_size": 63488 00:10:56.509 }, 00:10:56.509 { 00:10:56.509 "name": "pt3", 00:10:56.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.509 "is_configured": true, 00:10:56.509 "data_offset": 2048, 00:10:56.509 "data_size": 63488 00:10:56.509 } 00:10:56.509 ] 00:10:56.509 }' 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.509 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.076 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.076 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:10:57.076 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.076 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.076 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.076 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.076 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.077 [2024-10-11 09:44:41.517877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.077 "name": "raid_bdev1", 00:10:57.077 "aliases": [ 00:10:57.077 "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9" 00:10:57.077 ], 00:10:57.077 "product_name": "Raid Volume", 00:10:57.077 "block_size": 512, 00:10:57.077 "num_blocks": 190464, 00:10:57.077 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:57.077 "assigned_rate_limits": { 00:10:57.077 "rw_ios_per_sec": 0, 00:10:57.077 "rw_mbytes_per_sec": 0, 00:10:57.077 "r_mbytes_per_sec": 0, 00:10:57.077 "w_mbytes_per_sec": 0 00:10:57.077 }, 00:10:57.077 "claimed": false, 00:10:57.077 "zoned": false, 00:10:57.077 "supported_io_types": { 00:10:57.077 "read": true, 00:10:57.077 "write": true, 00:10:57.077 "unmap": true, 00:10:57.077 "flush": true, 00:10:57.077 "reset": true, 00:10:57.077 "nvme_admin": false, 00:10:57.077 "nvme_io": false, 00:10:57.077 
"nvme_io_md": false, 00:10:57.077 "write_zeroes": true, 00:10:57.077 "zcopy": false, 00:10:57.077 "get_zone_info": false, 00:10:57.077 "zone_management": false, 00:10:57.077 "zone_append": false, 00:10:57.077 "compare": false, 00:10:57.077 "compare_and_write": false, 00:10:57.077 "abort": false, 00:10:57.077 "seek_hole": false, 00:10:57.077 "seek_data": false, 00:10:57.077 "copy": false, 00:10:57.077 "nvme_iov_md": false 00:10:57.077 }, 00:10:57.077 "memory_domains": [ 00:10:57.077 { 00:10:57.077 "dma_device_id": "system", 00:10:57.077 "dma_device_type": 1 00:10:57.077 }, 00:10:57.077 { 00:10:57.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.077 "dma_device_type": 2 00:10:57.077 }, 00:10:57.077 { 00:10:57.077 "dma_device_id": "system", 00:10:57.077 "dma_device_type": 1 00:10:57.077 }, 00:10:57.077 { 00:10:57.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.077 "dma_device_type": 2 00:10:57.077 }, 00:10:57.077 { 00:10:57.077 "dma_device_id": "system", 00:10:57.077 "dma_device_type": 1 00:10:57.077 }, 00:10:57.077 { 00:10:57.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.077 "dma_device_type": 2 00:10:57.077 } 00:10:57.077 ], 00:10:57.077 "driver_specific": { 00:10:57.077 "raid": { 00:10:57.077 "uuid": "b85db4ac-8ce9-4ff4-97c1-fa142ea420e9", 00:10:57.077 "strip_size_kb": 64, 00:10:57.077 "state": "online", 00:10:57.077 "raid_level": "concat", 00:10:57.077 "superblock": true, 00:10:57.077 "num_base_bdevs": 3, 00:10:57.077 "num_base_bdevs_discovered": 3, 00:10:57.077 "num_base_bdevs_operational": 3, 00:10:57.077 "base_bdevs_list": [ 00:10:57.077 { 00:10:57.077 "name": "pt1", 00:10:57.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.077 "is_configured": true, 00:10:57.077 "data_offset": 2048, 00:10:57.077 "data_size": 63488 00:10:57.077 }, 00:10:57.077 { 00:10:57.077 "name": "pt2", 00:10:57.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.077 "is_configured": true, 00:10:57.077 "data_offset": 2048, 00:10:57.077 "data_size": 
63488 00:10:57.077 }, 00:10:57.077 { 00:10:57.077 "name": "pt3", 00:10:57.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.077 "is_configured": true, 00:10:57.077 "data_offset": 2048, 00:10:57.077 "data_size": 63488 00:10:57.077 } 00:10:57.077 ] 00:10:57.077 } 00:10:57.077 } 00:10:57.077 }' 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.077 pt2 00:10:57.077 pt3' 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.077 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:10:57.337 [2024-10-11 09:44:41.821331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b85db4ac-8ce9-4ff4-97c1-fa142ea420e9 '!=' b85db4ac-8ce9-4ff4-97c1-fa142ea420e9 ']' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67305 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 67305 ']' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 67305 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67305 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67305' 00:10:57.337 killing process with pid 67305 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 67305 00:10:57.337 [2024-10-11 09:44:41.875174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.337 [2024-10-11 09:44:41.875359] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.337 09:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 67305 00:10:57.337 [2024-10-11 09:44:41.875460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.337 [2024-10-11 09:44:41.875515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:57.597 [2024-10-11 09:44:42.189534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.975 09:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:58.975 00:10:58.975 real 0m5.426s 00:10:58.975 user 0m7.758s 00:10:58.975 sys 0m0.902s 00:10:58.975 09:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.975 09:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.975 ************************************ 00:10:58.975 END TEST raid_superblock_test 00:10:58.975 ************************************ 00:10:58.975 09:44:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:58.975 09:44:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:58.975 09:44:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.975 09:44:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.975 ************************************ 00:10:58.975 START TEST raid_read_error_test 00:10:58.975 ************************************ 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:58.975 09:44:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.975 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HDLXolth6G 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67558 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67558 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67558 ']' 00:10:58.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.976 09:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.976 [2024-10-11 09:44:43.587661] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:58.976 [2024-10-11 09:44:43.587886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67558 ] 00:10:59.235 [2024-10-11 09:44:43.756124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.494 [2024-10-11 09:44:43.886630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.786 [2024-10-11 09:44:44.129895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.786 [2024-10-11 09:44:44.130010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.065 BaseBdev1_malloc 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.065 true 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.065 [2024-10-11 09:44:44.592182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:00.065 [2024-10-11 09:44:44.592319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.065 [2024-10-11 09:44:44.592367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:00.065 [2024-10-11 09:44:44.592405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.065 [2024-10-11 09:44:44.595023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.065 [2024-10-11 09:44:44.595115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:00.065 BaseBdev1 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.065 BaseBdev2_malloc 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.065 true 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.065 [2024-10-11 09:44:44.668522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:00.065 [2024-10-11 09:44:44.668667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.065 [2024-10-11 09:44:44.668696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:00.065 [2024-10-11 09:44:44.668709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.065 [2024-10-11 09:44:44.671152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.065 [2024-10-11 09:44:44.671192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:00.065 BaseBdev2 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.065 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.325 BaseBdev3_malloc 00:11:00.325 09:44:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.325 true 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.325 [2024-10-11 09:44:44.751762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:00.325 [2024-10-11 09:44:44.751822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.325 [2024-10-11 09:44:44.751844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:00.325 [2024-10-11 09:44:44.751857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.325 [2024-10-11 09:44:44.754276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.325 [2024-10-11 09:44:44.754317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:00.325 BaseBdev3 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.325 [2024-10-11 09:44:44.763816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.325 [2024-10-11 09:44:44.765815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.325 [2024-10-11 09:44:44.765905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.325 [2024-10-11 09:44:44.766134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.325 [2024-10-11 09:44:44.766153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.325 [2024-10-11 09:44:44.766449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:00.325 [2024-10-11 09:44:44.766622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.325 [2024-10-11 09:44:44.766634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:00.325 [2024-10-11 09:44:44.766825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.325 09:44:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.325 "name": "raid_bdev1", 00:11:00.325 "uuid": "2281cd1c-434c-425c-bb73-61ed3a340574", 00:11:00.325 "strip_size_kb": 64, 00:11:00.325 "state": "online", 00:11:00.325 "raid_level": "concat", 00:11:00.325 "superblock": true, 00:11:00.325 "num_base_bdevs": 3, 00:11:00.325 "num_base_bdevs_discovered": 3, 00:11:00.325 "num_base_bdevs_operational": 3, 00:11:00.325 "base_bdevs_list": [ 00:11:00.325 { 00:11:00.325 "name": "BaseBdev1", 00:11:00.325 "uuid": "9b1caf60-f8fc-5a18-b850-e843e929e5a0", 00:11:00.325 "is_configured": true, 00:11:00.325 "data_offset": 2048, 00:11:00.325 "data_size": 63488 00:11:00.325 }, 00:11:00.325 { 00:11:00.325 "name": "BaseBdev2", 00:11:00.325 "uuid": "798648c9-942f-5650-bd6a-8f64664fde93", 00:11:00.325 "is_configured": true, 00:11:00.325 "data_offset": 2048, 00:11:00.325 "data_size": 63488 
00:11:00.325 }, 00:11:00.325 { 00:11:00.325 "name": "BaseBdev3", 00:11:00.325 "uuid": "b57d1993-042c-56f9-a61b-cd7c328deb77", 00:11:00.325 "is_configured": true, 00:11:00.325 "data_offset": 2048, 00:11:00.325 "data_size": 63488 00:11:00.325 } 00:11:00.325 ] 00:11:00.325 }' 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.325 09:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.893 09:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:00.894 09:44:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:00.894 [2024-10-11 09:44:45.356487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.835 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.836 "name": "raid_bdev1", 00:11:01.836 "uuid": "2281cd1c-434c-425c-bb73-61ed3a340574", 00:11:01.836 "strip_size_kb": 64, 00:11:01.836 "state": "online", 00:11:01.836 "raid_level": "concat", 00:11:01.836 "superblock": true, 00:11:01.836 "num_base_bdevs": 3, 00:11:01.836 "num_base_bdevs_discovered": 3, 00:11:01.836 "num_base_bdevs_operational": 3, 00:11:01.836 "base_bdevs_list": [ 00:11:01.836 { 00:11:01.836 "name": "BaseBdev1", 00:11:01.836 "uuid": "9b1caf60-f8fc-5a18-b850-e843e929e5a0", 00:11:01.836 "is_configured": true, 00:11:01.836 "data_offset": 2048, 00:11:01.836 "data_size": 63488 
00:11:01.836 }, 00:11:01.836 { 00:11:01.836 "name": "BaseBdev2", 00:11:01.836 "uuid": "798648c9-942f-5650-bd6a-8f64664fde93", 00:11:01.836 "is_configured": true, 00:11:01.836 "data_offset": 2048, 00:11:01.836 "data_size": 63488 00:11:01.836 }, 00:11:01.836 { 00:11:01.836 "name": "BaseBdev3", 00:11:01.836 "uuid": "b57d1993-042c-56f9-a61b-cd7c328deb77", 00:11:01.836 "is_configured": true, 00:11:01.836 "data_offset": 2048, 00:11:01.836 "data_size": 63488 00:11:01.836 } 00:11:01.836 ] 00:11:01.836 }' 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.836 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.094 [2024-10-11 09:44:46.697537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.094 [2024-10-11 09:44:46.697573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.094 [2024-10-11 09:44:46.700472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.094 [2024-10-11 09:44:46.700571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.094 [2024-10-11 09:44:46.700622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.094 [2024-10-11 09:44:46.700635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:02.094 { 00:11:02.094 "results": [ 00:11:02.094 { 00:11:02.094 "job": "raid_bdev1", 00:11:02.094 "core_mask": "0x1", 00:11:02.094 "workload": "randrw", 00:11:02.094 "percentage": 50, 
00:11:02.094 "status": "finished", 00:11:02.094 "queue_depth": 1, 00:11:02.094 "io_size": 131072, 00:11:02.094 "runtime": 1.341306, 00:11:02.094 "iops": 14014.699106691538, 00:11:02.094 "mibps": 1751.8373883364422, 00:11:02.094 "io_failed": 1, 00:11:02.094 "io_timeout": 0, 00:11:02.094 "avg_latency_us": 98.94226409422967, 00:11:02.094 "min_latency_us": 27.50043668122271, 00:11:02.094 "max_latency_us": 1688.482096069869 00:11:02.094 } 00:11:02.094 ], 00:11:02.094 "core_count": 1 00:11:02.094 } 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67558 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67558 ']' 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67558 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.094 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67558 00:11:02.353 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.353 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.353 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67558' 00:11:02.353 killing process with pid 67558 00:11:02.353 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67558 00:11:02.353 [2024-10-11 09:44:46.737062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.353 09:44:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67558 00:11:02.353 [2024-10-11 
09:44:46.968832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HDLXolth6G 00:11:03.730 ************************************ 00:11:03.730 END TEST raid_read_error_test 00:11:03.730 ************************************ 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:03.730 00:11:03.730 real 0m4.793s 00:11:03.730 user 0m5.752s 00:11:03.730 sys 0m0.610s 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.730 09:44:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.730 09:44:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:03.730 09:44:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:03.730 09:44:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.730 09:44:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.730 ************************************ 00:11:03.730 START TEST raid_write_error_test 00:11:03.730 ************************************ 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:11:03.730 09:44:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.730 09:44:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cID1JHiAPj 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67704 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67704 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67704 ']' 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.730 09:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.989 [2024-10-11 09:44:48.426974] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:03.989 [2024-10-11 09:44:48.427212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67704 ] 00:11:03.989 [2024-10-11 09:44:48.591519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.247 [2024-10-11 09:44:48.725839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.506 [2024-10-11 09:44:48.966553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.506 [2024-10-11 09:44:48.966709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 BaseBdev1_malloc 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 true 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 [2024-10-11 09:44:49.345278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.765 [2024-10-11 09:44:49.345339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.765 [2024-10-11 09:44:49.345362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:04.765 [2024-10-11 09:44:49.345374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.765 [2024-10-11 09:44:49.347763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.765 [2024-10-11 09:44:49.347804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.765 BaseBdev1 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.765 09:44:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.024 BaseBdev2_malloc 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.024 true 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.024 [2024-10-11 09:44:49.419001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:05.024 [2024-10-11 09:44:49.419060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.024 [2024-10-11 09:44:49.419095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:05.024 [2024-10-11 09:44:49.419108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.024 [2024-10-11 09:44:49.421577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.024 [2024-10-11 09:44:49.421666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:05.024 BaseBdev2 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.024 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.024 09:44:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 BaseBdev3_malloc 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 true 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 [2024-10-11 09:44:49.506017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:05.025 [2024-10-11 09:44:49.506078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.025 [2024-10-11 09:44:49.506100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:05.025 [2024-10-11 09:44:49.506112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.025 [2024-10-11 09:44:49.508610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.025 [2024-10-11 09:44:49.508656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:05.025 BaseBdev3 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 [2024-10-11 09:44:49.518011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.025 [2024-10-11 09:44:49.519971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.025 [2024-10-11 09:44:49.520068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.025 [2024-10-11 09:44:49.520314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.025 [2024-10-11 09:44:49.520327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.025 [2024-10-11 09:44:49.520625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:05.025 [2024-10-11 09:44:49.520832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.025 [2024-10-11 09:44:49.520846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:05.025 [2024-10-11 09:44:49.521031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.025 "name": "raid_bdev1", 00:11:05.025 "uuid": "3c6d0742-1d5a-41c4-ac9b-ae92db49c71a", 00:11:05.025 "strip_size_kb": 64, 00:11:05.025 "state": "online", 00:11:05.025 "raid_level": "concat", 00:11:05.025 "superblock": true, 00:11:05.025 "num_base_bdevs": 3, 00:11:05.025 "num_base_bdevs_discovered": 3, 00:11:05.025 "num_base_bdevs_operational": 3, 00:11:05.025 "base_bdevs_list": [ 00:11:05.025 { 00:11:05.025 
"name": "BaseBdev1", 00:11:05.025 "uuid": "e14b1e78-0f3d-5941-a78c-86d6ca6d08f1", 00:11:05.025 "is_configured": true, 00:11:05.025 "data_offset": 2048, 00:11:05.025 "data_size": 63488 00:11:05.025 }, 00:11:05.025 { 00:11:05.025 "name": "BaseBdev2", 00:11:05.025 "uuid": "a46c075e-5639-5dc8-8c82-c32c9c7f26ed", 00:11:05.025 "is_configured": true, 00:11:05.025 "data_offset": 2048, 00:11:05.025 "data_size": 63488 00:11:05.025 }, 00:11:05.025 { 00:11:05.025 "name": "BaseBdev3", 00:11:05.025 "uuid": "b2fd1df2-c513-56a9-905a-a1ee1fe6ede3", 00:11:05.025 "is_configured": true, 00:11:05.025 "data_offset": 2048, 00:11:05.025 "data_size": 63488 00:11:05.025 } 00:11:05.025 ] 00:11:05.025 }' 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.025 09:44:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.597 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:05.597 09:44:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:05.597 [2024-10-11 09:44:50.106685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:06.536 09:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:06.536 09:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.536 09:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.536 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.536 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.537 "name": "raid_bdev1", 00:11:06.537 "uuid": "3c6d0742-1d5a-41c4-ac9b-ae92db49c71a", 00:11:06.537 "strip_size_kb": 64, 00:11:06.537 "state": "online", 
00:11:06.537 "raid_level": "concat", 00:11:06.537 "superblock": true, 00:11:06.537 "num_base_bdevs": 3, 00:11:06.537 "num_base_bdevs_discovered": 3, 00:11:06.537 "num_base_bdevs_operational": 3, 00:11:06.537 "base_bdevs_list": [ 00:11:06.537 { 00:11:06.537 "name": "BaseBdev1", 00:11:06.537 "uuid": "e14b1e78-0f3d-5941-a78c-86d6ca6d08f1", 00:11:06.537 "is_configured": true, 00:11:06.537 "data_offset": 2048, 00:11:06.537 "data_size": 63488 00:11:06.537 }, 00:11:06.537 { 00:11:06.537 "name": "BaseBdev2", 00:11:06.537 "uuid": "a46c075e-5639-5dc8-8c82-c32c9c7f26ed", 00:11:06.537 "is_configured": true, 00:11:06.537 "data_offset": 2048, 00:11:06.537 "data_size": 63488 00:11:06.537 }, 00:11:06.537 { 00:11:06.537 "name": "BaseBdev3", 00:11:06.537 "uuid": "b2fd1df2-c513-56a9-905a-a1ee1fe6ede3", 00:11:06.537 "is_configured": true, 00:11:06.537 "data_offset": 2048, 00:11:06.537 "data_size": 63488 00:11:06.537 } 00:11:06.537 ] 00:11:06.537 }' 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.537 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.103 [2024-10-11 09:44:51.519578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.103 [2024-10-11 09:44:51.519665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.103 [2024-10-11 09:44:51.522686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.103 [2024-10-11 09:44:51.522816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.103 [2024-10-11 09:44:51.522895] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.103 [2024-10-11 09:44:51.522954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67704 00:11:07.103 { 00:11:07.103 "results": [ 00:11:07.103 { 00:11:07.103 "job": "raid_bdev1", 00:11:07.103 "core_mask": "0x1", 00:11:07.103 "workload": "randrw", 00:11:07.103 "percentage": 50, 00:11:07.103 "status": "finished", 00:11:07.103 "queue_depth": 1, 00:11:07.103 "io_size": 131072, 00:11:07.103 "runtime": 1.413605, 00:11:07.103 "iops": 13805.836849756473, 00:11:07.103 "mibps": 1725.7296062195592, 00:11:07.103 "io_failed": 1, 00:11:07.103 "io_timeout": 0, 00:11:07.103 "avg_latency_us": 100.36673167922355, 00:11:07.103 "min_latency_us": 28.50655021834061, 00:11:07.103 "max_latency_us": 1645.5545851528384 00:11:07.103 } 00:11:07.103 ], 00:11:07.103 "core_count": 1 00:11:07.103 } 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67704 ']' 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67704 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67704 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67704' 00:11:07.103 killing process with pid 67704 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67704 00:11:07.103 09:44:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67704 00:11:07.103 [2024-10-11 09:44:51.566054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.361 [2024-10-11 09:44:51.813853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cID1JHiAPj 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:08.761 ************************************ 00:11:08.761 END TEST raid_write_error_test 00:11:08.761 ************************************ 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:08.761 00:11:08.761 real 0m4.803s 00:11:08.761 user 0m5.770s 00:11:08.761 sys 0m0.584s 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.761 09:44:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.761 09:44:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:08.761 09:44:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:08.761 09:44:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:08.761 09:44:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.761 09:44:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.761 ************************************ 00:11:08.761 START TEST raid_state_function_test 00:11:08.761 ************************************ 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67848 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:08.761 Process raid pid: 67848 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67848' 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67848 00:11:08.761 09:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67848 ']' 00:11:08.762 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:08.762 09:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.762 09:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.762 09:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.762 09:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.762 09:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.762 [2024-10-11 09:44:53.277838] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:08.762 [2024-10-11 09:44:53.277980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.021 [2024-10-11 09:44:53.446864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.021 [2024-10-11 09:44:53.585281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.280 [2024-10-11 09:44:53.826394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.280 [2024-10-11 09:44:53.826442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.539 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.539 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:09.539 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:09.539 09:44:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.539 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.539 [2024-10-11 09:44:54.169360] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.539 [2024-10-11 09:44:54.169422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.539 [2024-10-11 09:44:54.169434] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.539 [2024-10-11 09:44:54.169446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.539 [2024-10-11 09:44:54.169455] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.539 [2024-10-11 09:44:54.169466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.797 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.797 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:09.797 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.798 
09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.798 "name": "Existed_Raid", 00:11:09.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.798 "strip_size_kb": 0, 00:11:09.798 "state": "configuring", 00:11:09.798 "raid_level": "raid1", 00:11:09.798 "superblock": false, 00:11:09.798 "num_base_bdevs": 3, 00:11:09.798 "num_base_bdevs_discovered": 0, 00:11:09.798 "num_base_bdevs_operational": 3, 00:11:09.798 "base_bdevs_list": [ 00:11:09.798 { 00:11:09.798 "name": "BaseBdev1", 00:11:09.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.798 "is_configured": false, 00:11:09.798 "data_offset": 0, 00:11:09.798 "data_size": 0 00:11:09.798 }, 00:11:09.798 { 00:11:09.798 "name": "BaseBdev2", 00:11:09.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.798 "is_configured": false, 00:11:09.798 "data_offset": 0, 00:11:09.798 "data_size": 0 00:11:09.798 }, 00:11:09.798 { 00:11:09.798 "name": "BaseBdev3", 00:11:09.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.798 "is_configured": false, 00:11:09.798 "data_offset": 0, 00:11:09.798 "data_size": 0 00:11:09.798 } 00:11:09.798 ] 00:11:09.798 }' 00:11:09.798 09:44:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.798 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.056 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.056 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.056 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.056 [2024-10-11 09:44:54.652513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.056 [2024-10-11 09:44:54.652621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:10.056 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.056 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:10.057 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.057 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.057 [2024-10-11 09:44:54.660505] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.057 [2024-10-11 09:44:54.660606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.057 [2024-10-11 09:44:54.660663] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.057 [2024-10-11 09:44:54.660709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.057 [2024-10-11 09:44:54.660768] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.057 [2024-10-11 09:44:54.660817] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.057 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.057 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.057 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.057 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.316 [2024-10-11 09:44:54.710659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.316 BaseBdev1 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.316 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.316 [ 00:11:10.316 { 00:11:10.316 "name": "BaseBdev1", 00:11:10.316 "aliases": [ 00:11:10.316 "d17b3aa0-58c3-4882-aa09-5b0a9e512b41" 00:11:10.316 ], 00:11:10.316 "product_name": "Malloc disk", 00:11:10.316 "block_size": 512, 00:11:10.316 "num_blocks": 65536, 00:11:10.316 "uuid": "d17b3aa0-58c3-4882-aa09-5b0a9e512b41", 00:11:10.316 "assigned_rate_limits": { 00:11:10.316 "rw_ios_per_sec": 0, 00:11:10.316 "rw_mbytes_per_sec": 0, 00:11:10.316 "r_mbytes_per_sec": 0, 00:11:10.316 "w_mbytes_per_sec": 0 00:11:10.316 }, 00:11:10.316 "claimed": true, 00:11:10.316 "claim_type": "exclusive_write", 00:11:10.316 "zoned": false, 00:11:10.316 "supported_io_types": { 00:11:10.316 "read": true, 00:11:10.316 "write": true, 00:11:10.316 "unmap": true, 00:11:10.316 "flush": true, 00:11:10.316 "reset": true, 00:11:10.316 "nvme_admin": false, 00:11:10.316 "nvme_io": false, 00:11:10.316 "nvme_io_md": false, 00:11:10.316 "write_zeroes": true, 00:11:10.317 "zcopy": true, 00:11:10.317 "get_zone_info": false, 00:11:10.317 "zone_management": false, 00:11:10.317 "zone_append": false, 00:11:10.317 "compare": false, 00:11:10.317 "compare_and_write": false, 00:11:10.317 "abort": true, 00:11:10.317 "seek_hole": false, 00:11:10.317 "seek_data": false, 00:11:10.317 "copy": true, 00:11:10.317 "nvme_iov_md": false 00:11:10.317 }, 00:11:10.317 "memory_domains": [ 00:11:10.317 { 00:11:10.317 "dma_device_id": "system", 00:11:10.317 "dma_device_type": 1 00:11:10.317 }, 00:11:10.317 { 00:11:10.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.317 "dma_device_type": 2 00:11:10.317 } 00:11:10.317 ], 00:11:10.317 "driver_specific": {} 00:11:10.317 } 00:11:10.317 ] 00:11:10.317 09:44:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:10.317 "name": "Existed_Raid", 00:11:10.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.317 "strip_size_kb": 0, 00:11:10.317 "state": "configuring", 00:11:10.317 "raid_level": "raid1", 00:11:10.317 "superblock": false, 00:11:10.317 "num_base_bdevs": 3, 00:11:10.317 "num_base_bdevs_discovered": 1, 00:11:10.317 "num_base_bdevs_operational": 3, 00:11:10.317 "base_bdevs_list": [ 00:11:10.317 { 00:11:10.317 "name": "BaseBdev1", 00:11:10.317 "uuid": "d17b3aa0-58c3-4882-aa09-5b0a9e512b41", 00:11:10.317 "is_configured": true, 00:11:10.317 "data_offset": 0, 00:11:10.317 "data_size": 65536 00:11:10.317 }, 00:11:10.317 { 00:11:10.317 "name": "BaseBdev2", 00:11:10.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.317 "is_configured": false, 00:11:10.317 "data_offset": 0, 00:11:10.317 "data_size": 0 00:11:10.317 }, 00:11:10.317 { 00:11:10.317 "name": "BaseBdev3", 00:11:10.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.317 "is_configured": false, 00:11:10.317 "data_offset": 0, 00:11:10.317 "data_size": 0 00:11:10.317 } 00:11:10.317 ] 00:11:10.317 }' 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.317 09:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.885 [2024-10-11 09:44:55.225897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.885 [2024-10-11 09:44:55.225958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.885 [2024-10-11 09:44:55.233935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.885 [2024-10-11 09:44:55.236000] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.885 [2024-10-11 09:44:55.236044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.885 [2024-10-11 09:44:55.236056] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.885 [2024-10-11 09:44:55.236067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.885 "name": "Existed_Raid", 00:11:10.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.885 "strip_size_kb": 0, 00:11:10.885 "state": "configuring", 00:11:10.885 "raid_level": "raid1", 00:11:10.885 "superblock": false, 00:11:10.885 "num_base_bdevs": 3, 00:11:10.885 "num_base_bdevs_discovered": 1, 00:11:10.885 "num_base_bdevs_operational": 3, 00:11:10.885 "base_bdevs_list": [ 00:11:10.885 { 00:11:10.885 "name": "BaseBdev1", 00:11:10.885 "uuid": "d17b3aa0-58c3-4882-aa09-5b0a9e512b41", 00:11:10.885 "is_configured": true, 00:11:10.885 "data_offset": 0, 00:11:10.885 "data_size": 65536 00:11:10.885 }, 00:11:10.885 { 00:11:10.885 "name": "BaseBdev2", 00:11:10.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.885 
"is_configured": false, 00:11:10.885 "data_offset": 0, 00:11:10.885 "data_size": 0 00:11:10.885 }, 00:11:10.885 { 00:11:10.885 "name": "BaseBdev3", 00:11:10.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.885 "is_configured": false, 00:11:10.885 "data_offset": 0, 00:11:10.885 "data_size": 0 00:11:10.885 } 00:11:10.885 ] 00:11:10.885 }' 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.885 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.144 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:11.144 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.144 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.144 [2024-10-11 09:44:55.754514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.144 BaseBdev2 00:11:11.144 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.144 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:11.144 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:11.144 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.145 09:44:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.145 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.406 [ 00:11:11.406 { 00:11:11.406 "name": "BaseBdev2", 00:11:11.406 "aliases": [ 00:11:11.406 "56ff9f10-e532-4f2a-ab99-b5be85e1cace" 00:11:11.406 ], 00:11:11.406 "product_name": "Malloc disk", 00:11:11.406 "block_size": 512, 00:11:11.406 "num_blocks": 65536, 00:11:11.406 "uuid": "56ff9f10-e532-4f2a-ab99-b5be85e1cace", 00:11:11.406 "assigned_rate_limits": { 00:11:11.406 "rw_ios_per_sec": 0, 00:11:11.406 "rw_mbytes_per_sec": 0, 00:11:11.406 "r_mbytes_per_sec": 0, 00:11:11.406 "w_mbytes_per_sec": 0 00:11:11.406 }, 00:11:11.406 "claimed": true, 00:11:11.406 "claim_type": "exclusive_write", 00:11:11.406 "zoned": false, 00:11:11.406 "supported_io_types": { 00:11:11.406 "read": true, 00:11:11.406 "write": true, 00:11:11.406 "unmap": true, 00:11:11.407 "flush": true, 00:11:11.407 "reset": true, 00:11:11.407 "nvme_admin": false, 00:11:11.407 "nvme_io": false, 00:11:11.407 "nvme_io_md": false, 00:11:11.407 "write_zeroes": true, 00:11:11.407 "zcopy": true, 00:11:11.407 "get_zone_info": false, 00:11:11.407 "zone_management": false, 00:11:11.407 "zone_append": false, 00:11:11.407 "compare": false, 00:11:11.407 "compare_and_write": false, 00:11:11.407 "abort": true, 00:11:11.407 "seek_hole": false, 00:11:11.407 "seek_data": false, 00:11:11.407 "copy": true, 00:11:11.407 "nvme_iov_md": false 00:11:11.407 }, 00:11:11.407 
"memory_domains": [ 00:11:11.407 { 00:11:11.407 "dma_device_id": "system", 00:11:11.407 "dma_device_type": 1 00:11:11.407 }, 00:11:11.407 { 00:11:11.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.407 "dma_device_type": 2 00:11:11.407 } 00:11:11.407 ], 00:11:11.407 "driver_specific": {} 00:11:11.407 } 00:11:11.407 ] 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.407 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.408 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.408 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.408 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.408 "name": "Existed_Raid", 00:11:11.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.408 "strip_size_kb": 0, 00:11:11.408 "state": "configuring", 00:11:11.408 "raid_level": "raid1", 00:11:11.408 "superblock": false, 00:11:11.408 "num_base_bdevs": 3, 00:11:11.408 "num_base_bdevs_discovered": 2, 00:11:11.408 "num_base_bdevs_operational": 3, 00:11:11.408 "base_bdevs_list": [ 00:11:11.408 { 00:11:11.408 "name": "BaseBdev1", 00:11:11.408 "uuid": "d17b3aa0-58c3-4882-aa09-5b0a9e512b41", 00:11:11.408 "is_configured": true, 00:11:11.408 "data_offset": 0, 00:11:11.408 "data_size": 65536 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "name": "BaseBdev2", 00:11:11.408 "uuid": "56ff9f10-e532-4f2a-ab99-b5be85e1cace", 00:11:11.408 "is_configured": true, 00:11:11.408 "data_offset": 0, 00:11:11.408 "data_size": 65536 00:11:11.408 }, 00:11:11.408 { 00:11:11.408 "name": "BaseBdev3", 00:11:11.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.408 "is_configured": false, 00:11:11.408 "data_offset": 0, 00:11:11.408 "data_size": 0 00:11:11.408 } 00:11:11.408 ] 00:11:11.408 }' 00:11:11.408 09:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.408 09:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.669 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:11.669 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.669 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.928 [2024-10-11 09:44:56.343283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.928 [2024-10-11 09:44:56.343468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:11.928 [2024-10-11 09:44:56.343505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:11.928 [2024-10-11 09:44:56.343907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:11.928 [2024-10-11 09:44:56.344158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:11.928 [2024-10-11 09:44:56.344212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:11.928 [2024-10-11 09:44:56.344573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.928 BaseBdev3 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.928 [ 00:11:11.928 { 00:11:11.928 "name": "BaseBdev3", 00:11:11.928 "aliases": [ 00:11:11.928 "5f36fb40-2a17-4765-8a0c-b40da60ad9d0" 00:11:11.928 ], 00:11:11.928 "product_name": "Malloc disk", 00:11:11.928 "block_size": 512, 00:11:11.928 "num_blocks": 65536, 00:11:11.928 "uuid": "5f36fb40-2a17-4765-8a0c-b40da60ad9d0", 00:11:11.928 "assigned_rate_limits": { 00:11:11.928 "rw_ios_per_sec": 0, 00:11:11.928 "rw_mbytes_per_sec": 0, 00:11:11.928 "r_mbytes_per_sec": 0, 00:11:11.928 "w_mbytes_per_sec": 0 00:11:11.928 }, 00:11:11.928 "claimed": true, 00:11:11.928 "claim_type": "exclusive_write", 00:11:11.928 "zoned": false, 00:11:11.928 "supported_io_types": { 00:11:11.928 "read": true, 00:11:11.928 "write": true, 00:11:11.928 "unmap": true, 00:11:11.928 "flush": true, 00:11:11.928 "reset": true, 00:11:11.928 "nvme_admin": false, 00:11:11.928 "nvme_io": false, 00:11:11.928 "nvme_io_md": false, 00:11:11.928 "write_zeroes": true, 00:11:11.928 "zcopy": true, 00:11:11.928 "get_zone_info": false, 00:11:11.928 "zone_management": false, 00:11:11.928 "zone_append": false, 00:11:11.928 "compare": false, 00:11:11.928 "compare_and_write": false, 00:11:11.928 "abort": true, 00:11:11.928 "seek_hole": false, 00:11:11.928 "seek_data": false, 00:11:11.928 
"copy": true, 00:11:11.928 "nvme_iov_md": false 00:11:11.928 }, 00:11:11.928 "memory_domains": [ 00:11:11.928 { 00:11:11.928 "dma_device_id": "system", 00:11:11.928 "dma_device_type": 1 00:11:11.928 }, 00:11:11.928 { 00:11:11.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.928 "dma_device_type": 2 00:11:11.928 } 00:11:11.928 ], 00:11:11.928 "driver_specific": {} 00:11:11.928 } 00:11:11.928 ] 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.928 09:44:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.928 "name": "Existed_Raid", 00:11:11.928 "uuid": "ed0733dc-98f1-42d2-bd63-df0e551385b7", 00:11:11.928 "strip_size_kb": 0, 00:11:11.928 "state": "online", 00:11:11.928 "raid_level": "raid1", 00:11:11.928 "superblock": false, 00:11:11.928 "num_base_bdevs": 3, 00:11:11.928 "num_base_bdevs_discovered": 3, 00:11:11.928 "num_base_bdevs_operational": 3, 00:11:11.928 "base_bdevs_list": [ 00:11:11.928 { 00:11:11.928 "name": "BaseBdev1", 00:11:11.928 "uuid": "d17b3aa0-58c3-4882-aa09-5b0a9e512b41", 00:11:11.928 "is_configured": true, 00:11:11.928 "data_offset": 0, 00:11:11.928 "data_size": 65536 00:11:11.928 }, 00:11:11.928 { 00:11:11.928 "name": "BaseBdev2", 00:11:11.928 "uuid": "56ff9f10-e532-4f2a-ab99-b5be85e1cace", 00:11:11.928 "is_configured": true, 00:11:11.928 "data_offset": 0, 00:11:11.928 "data_size": 65536 00:11:11.928 }, 00:11:11.928 { 00:11:11.928 "name": "BaseBdev3", 00:11:11.928 "uuid": "5f36fb40-2a17-4765-8a0c-b40da60ad9d0", 00:11:11.928 "is_configured": true, 00:11:11.928 "data_offset": 0, 00:11:11.928 "data_size": 65536 00:11:11.928 } 00:11:11.928 ] 00:11:11.928 }' 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.928 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.497 09:44:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.497 [2024-10-11 09:44:56.838960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.497 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:12.497 "name": "Existed_Raid", 00:11:12.497 "aliases": [ 00:11:12.497 "ed0733dc-98f1-42d2-bd63-df0e551385b7" 00:11:12.497 ], 00:11:12.497 "product_name": "Raid Volume", 00:11:12.497 "block_size": 512, 00:11:12.497 "num_blocks": 65536, 00:11:12.497 "uuid": "ed0733dc-98f1-42d2-bd63-df0e551385b7", 00:11:12.497 "assigned_rate_limits": { 00:11:12.498 "rw_ios_per_sec": 0, 00:11:12.498 "rw_mbytes_per_sec": 0, 00:11:12.498 "r_mbytes_per_sec": 0, 00:11:12.498 "w_mbytes_per_sec": 0 00:11:12.498 }, 00:11:12.498 "claimed": false, 00:11:12.498 "zoned": false, 
00:11:12.498 "supported_io_types": { 00:11:12.498 "read": true, 00:11:12.498 "write": true, 00:11:12.498 "unmap": false, 00:11:12.498 "flush": false, 00:11:12.498 "reset": true, 00:11:12.498 "nvme_admin": false, 00:11:12.498 "nvme_io": false, 00:11:12.498 "nvme_io_md": false, 00:11:12.498 "write_zeroes": true, 00:11:12.498 "zcopy": false, 00:11:12.498 "get_zone_info": false, 00:11:12.498 "zone_management": false, 00:11:12.498 "zone_append": false, 00:11:12.498 "compare": false, 00:11:12.498 "compare_and_write": false, 00:11:12.498 "abort": false, 00:11:12.498 "seek_hole": false, 00:11:12.498 "seek_data": false, 00:11:12.498 "copy": false, 00:11:12.498 "nvme_iov_md": false 00:11:12.498 }, 00:11:12.498 "memory_domains": [ 00:11:12.498 { 00:11:12.498 "dma_device_id": "system", 00:11:12.498 "dma_device_type": 1 00:11:12.498 }, 00:11:12.498 { 00:11:12.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.498 "dma_device_type": 2 00:11:12.498 }, 00:11:12.498 { 00:11:12.498 "dma_device_id": "system", 00:11:12.498 "dma_device_type": 1 00:11:12.498 }, 00:11:12.498 { 00:11:12.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.498 "dma_device_type": 2 00:11:12.498 }, 00:11:12.498 { 00:11:12.498 "dma_device_id": "system", 00:11:12.498 "dma_device_type": 1 00:11:12.498 }, 00:11:12.498 { 00:11:12.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.498 "dma_device_type": 2 00:11:12.498 } 00:11:12.498 ], 00:11:12.498 "driver_specific": { 00:11:12.498 "raid": { 00:11:12.498 "uuid": "ed0733dc-98f1-42d2-bd63-df0e551385b7", 00:11:12.498 "strip_size_kb": 0, 00:11:12.498 "state": "online", 00:11:12.498 "raid_level": "raid1", 00:11:12.498 "superblock": false, 00:11:12.498 "num_base_bdevs": 3, 00:11:12.498 "num_base_bdevs_discovered": 3, 00:11:12.498 "num_base_bdevs_operational": 3, 00:11:12.498 "base_bdevs_list": [ 00:11:12.498 { 00:11:12.498 "name": "BaseBdev1", 00:11:12.498 "uuid": "d17b3aa0-58c3-4882-aa09-5b0a9e512b41", 00:11:12.498 "is_configured": true, 00:11:12.498 
"data_offset": 0, 00:11:12.498 "data_size": 65536 00:11:12.498 }, 00:11:12.498 { 00:11:12.498 "name": "BaseBdev2", 00:11:12.498 "uuid": "56ff9f10-e532-4f2a-ab99-b5be85e1cace", 00:11:12.498 "is_configured": true, 00:11:12.498 "data_offset": 0, 00:11:12.498 "data_size": 65536 00:11:12.498 }, 00:11:12.498 { 00:11:12.498 "name": "BaseBdev3", 00:11:12.498 "uuid": "5f36fb40-2a17-4765-8a0c-b40da60ad9d0", 00:11:12.498 "is_configured": true, 00:11:12.498 "data_offset": 0, 00:11:12.498 "data_size": 65536 00:11:12.498 } 00:11:12.498 ] 00:11:12.498 } 00:11:12.498 } 00:11:12.498 }' 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:12.498 BaseBdev2 00:11:12.498 BaseBdev3' 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.498 09:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.498 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.498 [2024-10-11 09:44:57.118166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.758 "name": "Existed_Raid", 00:11:12.758 "uuid": "ed0733dc-98f1-42d2-bd63-df0e551385b7", 00:11:12.758 "strip_size_kb": 0, 00:11:12.758 "state": "online", 00:11:12.758 "raid_level": "raid1", 00:11:12.758 "superblock": false, 00:11:12.758 "num_base_bdevs": 3, 00:11:12.758 "num_base_bdevs_discovered": 2, 00:11:12.758 "num_base_bdevs_operational": 2, 00:11:12.758 "base_bdevs_list": [ 00:11:12.758 { 00:11:12.758 "name": null, 00:11:12.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.758 "is_configured": false, 00:11:12.758 "data_offset": 0, 00:11:12.758 "data_size": 65536 00:11:12.758 }, 00:11:12.758 { 00:11:12.758 "name": "BaseBdev2", 00:11:12.758 "uuid": "56ff9f10-e532-4f2a-ab99-b5be85e1cace", 00:11:12.758 "is_configured": true, 00:11:12.758 "data_offset": 0, 00:11:12.758 "data_size": 65536 00:11:12.758 }, 00:11:12.758 { 00:11:12.758 "name": "BaseBdev3", 00:11:12.758 "uuid": "5f36fb40-2a17-4765-8a0c-b40da60ad9d0", 00:11:12.758 "is_configured": true, 00:11:12.758 "data_offset": 0, 00:11:12.758 "data_size": 65536 00:11:12.758 } 00:11:12.758 ] 
00:11:12.758 }' 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.758 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.327 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:13.327 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.327 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:13.327 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.328 [2024-10-11 09:44:57.755968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.328 09:44:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.328 09:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.328 [2024-10-11 09:44:57.918809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:13.328 [2024-10-11 09:44:57.919000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.587 [2024-10-11 09:44:58.023061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.587 [2024-10-11 09:44:58.023207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.587 [2024-10-11 09:44:58.023251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:13.587 09:44:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:13.587 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.588 BaseBdev2 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.588 
09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.588 [ 00:11:13.588 { 00:11:13.588 "name": "BaseBdev2", 00:11:13.588 "aliases": [ 00:11:13.588 "01c8b4c8-1d91-4806-965a-ca31317d7369" 00:11:13.588 ], 00:11:13.588 "product_name": "Malloc disk", 00:11:13.588 "block_size": 512, 00:11:13.588 "num_blocks": 65536, 00:11:13.588 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:13.588 "assigned_rate_limits": { 00:11:13.588 "rw_ios_per_sec": 0, 00:11:13.588 "rw_mbytes_per_sec": 0, 00:11:13.588 "r_mbytes_per_sec": 0, 00:11:13.588 "w_mbytes_per_sec": 0 00:11:13.588 }, 00:11:13.588 "claimed": false, 00:11:13.588 "zoned": false, 00:11:13.588 "supported_io_types": { 00:11:13.588 "read": true, 00:11:13.588 "write": true, 00:11:13.588 "unmap": true, 00:11:13.588 "flush": true, 00:11:13.588 "reset": true, 00:11:13.588 "nvme_admin": false, 00:11:13.588 "nvme_io": false, 00:11:13.588 "nvme_io_md": false, 00:11:13.588 "write_zeroes": true, 
00:11:13.588 "zcopy": true, 00:11:13.588 "get_zone_info": false, 00:11:13.588 "zone_management": false, 00:11:13.588 "zone_append": false, 00:11:13.588 "compare": false, 00:11:13.588 "compare_and_write": false, 00:11:13.588 "abort": true, 00:11:13.588 "seek_hole": false, 00:11:13.588 "seek_data": false, 00:11:13.588 "copy": true, 00:11:13.588 "nvme_iov_md": false 00:11:13.588 }, 00:11:13.588 "memory_domains": [ 00:11:13.588 { 00:11:13.588 "dma_device_id": "system", 00:11:13.588 "dma_device_type": 1 00:11:13.588 }, 00:11:13.588 { 00:11:13.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.588 "dma_device_type": 2 00:11:13.588 } 00:11:13.588 ], 00:11:13.588 "driver_specific": {} 00:11:13.588 } 00:11:13.588 ] 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.588 BaseBdev3 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.588 09:44:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.588 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.847 [ 00:11:13.847 { 00:11:13.847 "name": "BaseBdev3", 00:11:13.847 "aliases": [ 00:11:13.847 "20287a9c-bf34-427f-93d7-0e05956d7af1" 00:11:13.847 ], 00:11:13.847 "product_name": "Malloc disk", 00:11:13.847 "block_size": 512, 00:11:13.847 "num_blocks": 65536, 00:11:13.847 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:13.847 "assigned_rate_limits": { 00:11:13.847 "rw_ios_per_sec": 0, 00:11:13.847 "rw_mbytes_per_sec": 0, 00:11:13.847 "r_mbytes_per_sec": 0, 00:11:13.847 "w_mbytes_per_sec": 0 00:11:13.847 }, 00:11:13.847 "claimed": false, 00:11:13.847 "zoned": false, 00:11:13.847 "supported_io_types": { 00:11:13.847 "read": true, 00:11:13.847 "write": true, 00:11:13.847 "unmap": true, 00:11:13.847 "flush": true, 00:11:13.847 "reset": true, 00:11:13.847 "nvme_admin": false, 00:11:13.847 "nvme_io": false, 00:11:13.847 "nvme_io_md": false, 00:11:13.847 "write_zeroes": true, 
00:11:13.847 "zcopy": true, 00:11:13.847 "get_zone_info": false, 00:11:13.847 "zone_management": false, 00:11:13.847 "zone_append": false, 00:11:13.847 "compare": false, 00:11:13.847 "compare_and_write": false, 00:11:13.847 "abort": true, 00:11:13.847 "seek_hole": false, 00:11:13.847 "seek_data": false, 00:11:13.847 "copy": true, 00:11:13.847 "nvme_iov_md": false 00:11:13.847 }, 00:11:13.847 "memory_domains": [ 00:11:13.847 { 00:11:13.847 "dma_device_id": "system", 00:11:13.847 "dma_device_type": 1 00:11:13.847 }, 00:11:13.847 { 00:11:13.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.847 "dma_device_type": 2 00:11:13.847 } 00:11:13.847 ], 00:11:13.847 "driver_specific": {} 00:11:13.847 } 00:11:13.847 ] 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.847 [2024-10-11 09:44:58.252798] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.847 [2024-10-11 09:44:58.252853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.847 [2024-10-11 09:44:58.252881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.847 [2024-10-11 09:44:58.255076] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.847 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:13.847 "name": "Existed_Raid", 00:11:13.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.847 "strip_size_kb": 0, 00:11:13.847 "state": "configuring", 00:11:13.847 "raid_level": "raid1", 00:11:13.847 "superblock": false, 00:11:13.847 "num_base_bdevs": 3, 00:11:13.847 "num_base_bdevs_discovered": 2, 00:11:13.847 "num_base_bdevs_operational": 3, 00:11:13.847 "base_bdevs_list": [ 00:11:13.847 { 00:11:13.847 "name": "BaseBdev1", 00:11:13.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.847 "is_configured": false, 00:11:13.847 "data_offset": 0, 00:11:13.847 "data_size": 0 00:11:13.847 }, 00:11:13.848 { 00:11:13.848 "name": "BaseBdev2", 00:11:13.848 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:13.848 "is_configured": true, 00:11:13.848 "data_offset": 0, 00:11:13.848 "data_size": 65536 00:11:13.848 }, 00:11:13.848 { 00:11:13.848 "name": "BaseBdev3", 00:11:13.848 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:13.848 "is_configured": true, 00:11:13.848 "data_offset": 0, 00:11:13.848 "data_size": 65536 00:11:13.848 } 00:11:13.848 ] 00:11:13.848 }' 00:11:13.848 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.848 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.106 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:14.106 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.106 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.106 [2024-10-11 09:44:58.719995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.106 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.106 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:14.106 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.107 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.366 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.366 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.366 "name": "Existed_Raid", 00:11:14.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.366 "strip_size_kb": 0, 00:11:14.366 "state": "configuring", 00:11:14.366 "raid_level": "raid1", 00:11:14.366 "superblock": false, 00:11:14.366 "num_base_bdevs": 3, 
00:11:14.366 "num_base_bdevs_discovered": 1, 00:11:14.366 "num_base_bdevs_operational": 3, 00:11:14.366 "base_bdevs_list": [ 00:11:14.366 { 00:11:14.366 "name": "BaseBdev1", 00:11:14.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.366 "is_configured": false, 00:11:14.366 "data_offset": 0, 00:11:14.366 "data_size": 0 00:11:14.366 }, 00:11:14.366 { 00:11:14.366 "name": null, 00:11:14.366 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:14.366 "is_configured": false, 00:11:14.366 "data_offset": 0, 00:11:14.366 "data_size": 65536 00:11:14.366 }, 00:11:14.366 { 00:11:14.366 "name": "BaseBdev3", 00:11:14.366 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:14.366 "is_configured": true, 00:11:14.366 "data_offset": 0, 00:11:14.366 "data_size": 65536 00:11:14.366 } 00:11:14.366 ] 00:11:14.366 }' 00:11:14.366 09:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.366 09:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.624 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.624 09:44:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.884 [2024-10-11 09:44:59.268728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.884 BaseBdev1 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.884 [ 00:11:14.884 { 00:11:14.884 "name": "BaseBdev1", 00:11:14.884 "aliases": [ 00:11:14.884 "308f7051-0043-4312-b870-ae9b0db8da62" 00:11:14.884 ], 00:11:14.884 "product_name": "Malloc disk", 
00:11:14.884 "block_size": 512, 00:11:14.884 "num_blocks": 65536, 00:11:14.884 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:14.884 "assigned_rate_limits": { 00:11:14.884 "rw_ios_per_sec": 0, 00:11:14.884 "rw_mbytes_per_sec": 0, 00:11:14.884 "r_mbytes_per_sec": 0, 00:11:14.884 "w_mbytes_per_sec": 0 00:11:14.884 }, 00:11:14.884 "claimed": true, 00:11:14.884 "claim_type": "exclusive_write", 00:11:14.884 "zoned": false, 00:11:14.884 "supported_io_types": { 00:11:14.884 "read": true, 00:11:14.884 "write": true, 00:11:14.884 "unmap": true, 00:11:14.884 "flush": true, 00:11:14.884 "reset": true, 00:11:14.884 "nvme_admin": false, 00:11:14.884 "nvme_io": false, 00:11:14.884 "nvme_io_md": false, 00:11:14.884 "write_zeroes": true, 00:11:14.884 "zcopy": true, 00:11:14.884 "get_zone_info": false, 00:11:14.884 "zone_management": false, 00:11:14.884 "zone_append": false, 00:11:14.884 "compare": false, 00:11:14.884 "compare_and_write": false, 00:11:14.884 "abort": true, 00:11:14.884 "seek_hole": false, 00:11:14.884 "seek_data": false, 00:11:14.884 "copy": true, 00:11:14.884 "nvme_iov_md": false 00:11:14.884 }, 00:11:14.884 "memory_domains": [ 00:11:14.884 { 00:11:14.884 "dma_device_id": "system", 00:11:14.884 "dma_device_type": 1 00:11:14.884 }, 00:11:14.884 { 00:11:14.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.884 "dma_device_type": 2 00:11:14.884 } 00:11:14.884 ], 00:11:14.884 "driver_specific": {} 00:11:14.884 } 00:11:14.884 ] 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.884 "name": "Existed_Raid", 00:11:14.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.884 "strip_size_kb": 0, 00:11:14.884 "state": "configuring", 00:11:14.884 "raid_level": "raid1", 00:11:14.884 "superblock": false, 00:11:14.884 "num_base_bdevs": 3, 00:11:14.884 "num_base_bdevs_discovered": 2, 00:11:14.884 "num_base_bdevs_operational": 3, 00:11:14.884 "base_bdevs_list": [ 00:11:14.884 { 00:11:14.884 "name": "BaseBdev1", 00:11:14.884 "uuid": 
"308f7051-0043-4312-b870-ae9b0db8da62", 00:11:14.884 "is_configured": true, 00:11:14.884 "data_offset": 0, 00:11:14.884 "data_size": 65536 00:11:14.884 }, 00:11:14.884 { 00:11:14.884 "name": null, 00:11:14.884 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:14.884 "is_configured": false, 00:11:14.884 "data_offset": 0, 00:11:14.884 "data_size": 65536 00:11:14.884 }, 00:11:14.884 { 00:11:14.884 "name": "BaseBdev3", 00:11:14.884 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:14.884 "is_configured": true, 00:11:14.884 "data_offset": 0, 00:11:14.884 "data_size": 65536 00:11:14.884 } 00:11:14.884 ] 00:11:14.884 }' 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.884 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.143 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.143 [2024-10-11 09:44:59.771945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:15.400 09:44:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.400 "name": "Existed_Raid", 00:11:15.400 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:15.400 "strip_size_kb": 0, 00:11:15.400 "state": "configuring", 00:11:15.400 "raid_level": "raid1", 00:11:15.400 "superblock": false, 00:11:15.400 "num_base_bdevs": 3, 00:11:15.400 "num_base_bdevs_discovered": 1, 00:11:15.400 "num_base_bdevs_operational": 3, 00:11:15.400 "base_bdevs_list": [ 00:11:15.400 { 00:11:15.400 "name": "BaseBdev1", 00:11:15.400 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:15.400 "is_configured": true, 00:11:15.400 "data_offset": 0, 00:11:15.400 "data_size": 65536 00:11:15.400 }, 00:11:15.400 { 00:11:15.400 "name": null, 00:11:15.400 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:15.400 "is_configured": false, 00:11:15.400 "data_offset": 0, 00:11:15.400 "data_size": 65536 00:11:15.400 }, 00:11:15.400 { 00:11:15.400 "name": null, 00:11:15.400 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:15.400 "is_configured": false, 00:11:15.400 "data_offset": 0, 00:11:15.400 "data_size": 65536 00:11:15.400 } 00:11:15.400 ] 00:11:15.400 }' 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.400 09:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.660 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.660 [2024-10-11 09:45:00.287295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.918 "name": "Existed_Raid", 00:11:15.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.918 "strip_size_kb": 0, 00:11:15.918 "state": "configuring", 00:11:15.918 "raid_level": "raid1", 00:11:15.918 "superblock": false, 00:11:15.918 "num_base_bdevs": 3, 00:11:15.918 "num_base_bdevs_discovered": 2, 00:11:15.918 "num_base_bdevs_operational": 3, 00:11:15.918 "base_bdevs_list": [ 00:11:15.918 { 00:11:15.918 "name": "BaseBdev1", 00:11:15.918 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:15.918 "is_configured": true, 00:11:15.918 "data_offset": 0, 00:11:15.918 "data_size": 65536 00:11:15.918 }, 00:11:15.918 { 00:11:15.918 "name": null, 00:11:15.918 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:15.918 "is_configured": false, 00:11:15.918 "data_offset": 0, 00:11:15.918 "data_size": 65536 00:11:15.918 }, 00:11:15.918 { 00:11:15.918 "name": "BaseBdev3", 00:11:15.918 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:15.918 "is_configured": true, 00:11:15.918 "data_offset": 0, 00:11:15.918 "data_size": 65536 00:11:15.918 } 00:11:15.918 ] 00:11:15.918 }' 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.918 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.176 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.176 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.176 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:16.176 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.176 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.434 [2024-10-11 09:45:00.830467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.434 09:45:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.434 "name": "Existed_Raid", 00:11:16.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.434 "strip_size_kb": 0, 00:11:16.434 "state": "configuring", 00:11:16.434 "raid_level": "raid1", 00:11:16.434 "superblock": false, 00:11:16.434 "num_base_bdevs": 3, 00:11:16.434 "num_base_bdevs_discovered": 1, 00:11:16.434 "num_base_bdevs_operational": 3, 00:11:16.434 "base_bdevs_list": [ 00:11:16.434 { 00:11:16.434 "name": null, 00:11:16.434 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:16.434 "is_configured": false, 00:11:16.434 "data_offset": 0, 00:11:16.434 "data_size": 65536 00:11:16.434 }, 00:11:16.434 { 00:11:16.434 "name": null, 00:11:16.434 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:16.434 "is_configured": false, 00:11:16.434 "data_offset": 0, 00:11:16.434 "data_size": 65536 00:11:16.434 }, 00:11:16.434 { 00:11:16.434 "name": "BaseBdev3", 00:11:16.434 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:16.434 "is_configured": true, 00:11:16.434 "data_offset": 0, 00:11:16.434 "data_size": 65536 00:11:16.434 } 00:11:16.434 ] 00:11:16.434 }' 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.434 09:45:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.001 [2024-10-11 09:45:01.454878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.001 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.002 "name": "Existed_Raid", 00:11:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.002 "strip_size_kb": 0, 00:11:17.002 "state": "configuring", 00:11:17.002 "raid_level": "raid1", 00:11:17.002 "superblock": false, 00:11:17.002 "num_base_bdevs": 3, 00:11:17.002 "num_base_bdevs_discovered": 2, 00:11:17.002 "num_base_bdevs_operational": 3, 00:11:17.002 "base_bdevs_list": [ 00:11:17.002 { 00:11:17.002 "name": null, 00:11:17.002 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:17.002 "is_configured": false, 00:11:17.002 "data_offset": 0, 00:11:17.002 "data_size": 65536 00:11:17.002 }, 00:11:17.002 { 00:11:17.002 "name": "BaseBdev2", 00:11:17.002 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:17.002 "is_configured": true, 00:11:17.002 "data_offset": 0, 00:11:17.002 "data_size": 65536 00:11:17.002 }, 00:11:17.002 { 
00:11:17.002 "name": "BaseBdev3", 00:11:17.002 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:17.002 "is_configured": true, 00:11:17.002 "data_offset": 0, 00:11:17.002 "data_size": 65536 00:11:17.002 } 00:11:17.002 ] 00:11:17.002 }' 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.002 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.570 09:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 308f7051-0043-4312-b870-ae9b0db8da62 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.571 09:45:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.571 [2024-10-11 09:45:02.081361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:17.571 [2024-10-11 09:45:02.081438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:17.571 [2024-10-11 09:45:02.081447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:17.571 [2024-10-11 09:45:02.081748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:17.571 [2024-10-11 09:45:02.081981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:17.571 [2024-10-11 09:45:02.082016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:17.571 [2024-10-11 09:45:02.082322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.571 NewBaseBdev 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.571 [ 00:11:17.571 { 00:11:17.571 "name": "NewBaseBdev", 00:11:17.571 "aliases": [ 00:11:17.571 "308f7051-0043-4312-b870-ae9b0db8da62" 00:11:17.571 ], 00:11:17.571 "product_name": "Malloc disk", 00:11:17.571 "block_size": 512, 00:11:17.571 "num_blocks": 65536, 00:11:17.571 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:17.571 "assigned_rate_limits": { 00:11:17.571 "rw_ios_per_sec": 0, 00:11:17.571 "rw_mbytes_per_sec": 0, 00:11:17.571 "r_mbytes_per_sec": 0, 00:11:17.571 "w_mbytes_per_sec": 0 00:11:17.571 }, 00:11:17.571 "claimed": true, 00:11:17.571 "claim_type": "exclusive_write", 00:11:17.571 "zoned": false, 00:11:17.571 "supported_io_types": { 00:11:17.571 "read": true, 00:11:17.571 "write": true, 00:11:17.571 "unmap": true, 00:11:17.571 "flush": true, 00:11:17.571 "reset": true, 00:11:17.571 "nvme_admin": false, 00:11:17.571 "nvme_io": false, 00:11:17.571 "nvme_io_md": false, 00:11:17.571 "write_zeroes": true, 00:11:17.571 "zcopy": true, 00:11:17.571 "get_zone_info": false, 00:11:17.571 "zone_management": false, 00:11:17.571 "zone_append": false, 00:11:17.571 "compare": false, 00:11:17.571 "compare_and_write": false, 00:11:17.571 "abort": true, 00:11:17.571 "seek_hole": false, 00:11:17.571 "seek_data": false, 00:11:17.571 "copy": true, 00:11:17.571 "nvme_iov_md": false 00:11:17.571 }, 00:11:17.571 "memory_domains": [ 00:11:17.571 { 00:11:17.571 
"dma_device_id": "system", 00:11:17.571 "dma_device_type": 1 00:11:17.571 }, 00:11:17.571 { 00:11:17.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.571 "dma_device_type": 2 00:11:17.571 } 00:11:17.571 ], 00:11:17.571 "driver_specific": {} 00:11:17.571 } 00:11:17.571 ] 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.571 "name": "Existed_Raid", 00:11:17.571 "uuid": "41642508-8714-4acd-9370-f47a2719d609", 00:11:17.571 "strip_size_kb": 0, 00:11:17.571 "state": "online", 00:11:17.571 "raid_level": "raid1", 00:11:17.571 "superblock": false, 00:11:17.571 "num_base_bdevs": 3, 00:11:17.571 "num_base_bdevs_discovered": 3, 00:11:17.571 "num_base_bdevs_operational": 3, 00:11:17.571 "base_bdevs_list": [ 00:11:17.571 { 00:11:17.571 "name": "NewBaseBdev", 00:11:17.571 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:17.571 "is_configured": true, 00:11:17.571 "data_offset": 0, 00:11:17.571 "data_size": 65536 00:11:17.571 }, 00:11:17.571 { 00:11:17.571 "name": "BaseBdev2", 00:11:17.571 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:17.571 "is_configured": true, 00:11:17.571 "data_offset": 0, 00:11:17.571 "data_size": 65536 00:11:17.571 }, 00:11:17.571 { 00:11:17.571 "name": "BaseBdev3", 00:11:17.571 "uuid": "20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:17.571 "is_configured": true, 00:11:17.571 "data_offset": 0, 00:11:17.571 "data_size": 65536 00:11:17.571 } 00:11:17.571 ] 00:11:17.571 }' 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.571 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.139 09:45:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 [2024-10-11 09:45:02.608907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.139 "name": "Existed_Raid", 00:11:18.139 "aliases": [ 00:11:18.139 "41642508-8714-4acd-9370-f47a2719d609" 00:11:18.139 ], 00:11:18.139 "product_name": "Raid Volume", 00:11:18.139 "block_size": 512, 00:11:18.139 "num_blocks": 65536, 00:11:18.139 "uuid": "41642508-8714-4acd-9370-f47a2719d609", 00:11:18.139 "assigned_rate_limits": { 00:11:18.139 "rw_ios_per_sec": 0, 00:11:18.139 "rw_mbytes_per_sec": 0, 00:11:18.139 "r_mbytes_per_sec": 0, 00:11:18.139 "w_mbytes_per_sec": 0 00:11:18.139 }, 00:11:18.139 "claimed": false, 00:11:18.139 "zoned": false, 00:11:18.139 "supported_io_types": { 00:11:18.139 "read": true, 00:11:18.139 "write": true, 00:11:18.139 "unmap": false, 00:11:18.139 "flush": false, 00:11:18.139 "reset": true, 00:11:18.139 "nvme_admin": false, 00:11:18.139 "nvme_io": false, 00:11:18.139 "nvme_io_md": false, 00:11:18.139 "write_zeroes": true, 00:11:18.139 "zcopy": false, 00:11:18.139 
"get_zone_info": false, 00:11:18.139 "zone_management": false, 00:11:18.139 "zone_append": false, 00:11:18.139 "compare": false, 00:11:18.139 "compare_and_write": false, 00:11:18.139 "abort": false, 00:11:18.139 "seek_hole": false, 00:11:18.139 "seek_data": false, 00:11:18.139 "copy": false, 00:11:18.139 "nvme_iov_md": false 00:11:18.139 }, 00:11:18.139 "memory_domains": [ 00:11:18.139 { 00:11:18.139 "dma_device_id": "system", 00:11:18.139 "dma_device_type": 1 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.139 "dma_device_type": 2 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "dma_device_id": "system", 00:11:18.139 "dma_device_type": 1 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.139 "dma_device_type": 2 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "dma_device_id": "system", 00:11:18.139 "dma_device_type": 1 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.139 "dma_device_type": 2 00:11:18.139 } 00:11:18.139 ], 00:11:18.139 "driver_specific": { 00:11:18.139 "raid": { 00:11:18.139 "uuid": "41642508-8714-4acd-9370-f47a2719d609", 00:11:18.139 "strip_size_kb": 0, 00:11:18.139 "state": "online", 00:11:18.139 "raid_level": "raid1", 00:11:18.139 "superblock": false, 00:11:18.139 "num_base_bdevs": 3, 00:11:18.139 "num_base_bdevs_discovered": 3, 00:11:18.139 "num_base_bdevs_operational": 3, 00:11:18.139 "base_bdevs_list": [ 00:11:18.139 { 00:11:18.139 "name": "NewBaseBdev", 00:11:18.139 "uuid": "308f7051-0043-4312-b870-ae9b0db8da62", 00:11:18.139 "is_configured": true, 00:11:18.139 "data_offset": 0, 00:11:18.139 "data_size": 65536 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "name": "BaseBdev2", 00:11:18.139 "uuid": "01c8b4c8-1d91-4806-965a-ca31317d7369", 00:11:18.139 "is_configured": true, 00:11:18.139 "data_offset": 0, 00:11:18.139 "data_size": 65536 00:11:18.139 }, 00:11:18.139 { 00:11:18.139 "name": "BaseBdev3", 00:11:18.139 "uuid": 
"20287a9c-bf34-427f-93d7-0e05956d7af1", 00:11:18.139 "is_configured": true, 00:11:18.139 "data_offset": 0, 00:11:18.139 "data_size": 65536 00:11:18.139 } 00:11:18.139 ] 00:11:18.139 } 00:11:18.139 } 00:11:18.139 }' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:18.139 BaseBdev2 00:11:18.139 BaseBdev3' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.399 
[2024-10-11 09:45:02.836139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.399 [2024-10-11 09:45:02.836182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.399 [2024-10-11 09:45:02.836279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.399 [2024-10-11 09:45:02.836633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.399 [2024-10-11 09:45:02.836659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67848 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67848 ']' 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67848 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67848 00:11:18.399 killing process with pid 67848 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67848' 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67848 00:11:18.399 [2024-10-11 
09:45:02.883439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.399 09:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67848 00:11:18.658 [2024-10-11 09:45:03.219251] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.067 ************************************ 00:11:20.067 END TEST raid_state_function_test 00:11:20.067 ************************************ 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:20.067 00:11:20.067 real 0m11.266s 00:11:20.067 user 0m17.934s 00:11:20.067 sys 0m1.921s 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.067 09:45:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:20.067 09:45:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:20.067 09:45:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.067 09:45:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.067 ************************************ 00:11:20.067 START TEST raid_state_function_test_sb 00:11:20.067 ************************************ 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:20.067 09:45:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:20.067 
09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68475 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:20.067 Process raid pid: 68475 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68475' 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68475 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68475 ']' 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.067 09:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.067 [2024-10-11 09:45:04.612761] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:11:20.067 [2024-10-11 09:45:04.612928] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.326 [2024-10-11 09:45:04.772399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.326 [2024-10-11 09:45:04.907271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.585 [2024-10-11 09:45:05.157290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.585 [2024-10-11 09:45:05.157350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.151 [2024-10-11 09:45:05.506587] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.151 [2024-10-11 09:45:05.506655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.151 [2024-10-11 09:45:05.506666] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.151 [2024-10-11 09:45:05.506678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.151 [2024-10-11 09:45:05.506686] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:21.151 [2024-10-11 09:45:05.506695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.151 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.151 "name": "Existed_Raid", 00:11:21.151 "uuid": "d1553d96-e83e-4ba0-9fe1-cc14cf8830c8", 00:11:21.151 "strip_size_kb": 0, 00:11:21.151 "state": "configuring", 00:11:21.151 "raid_level": "raid1", 00:11:21.151 "superblock": true, 00:11:21.151 "num_base_bdevs": 3, 00:11:21.151 "num_base_bdevs_discovered": 0, 00:11:21.151 "num_base_bdevs_operational": 3, 00:11:21.151 "base_bdevs_list": [ 00:11:21.151 { 00:11:21.151 "name": "BaseBdev1", 00:11:21.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.151 "is_configured": false, 00:11:21.151 "data_offset": 0, 00:11:21.151 "data_size": 0 00:11:21.151 }, 00:11:21.151 { 00:11:21.151 "name": "BaseBdev2", 00:11:21.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.151 "is_configured": false, 00:11:21.151 "data_offset": 0, 00:11:21.151 "data_size": 0 00:11:21.151 }, 00:11:21.151 { 00:11:21.151 "name": "BaseBdev3", 00:11:21.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.152 "is_configured": false, 00:11:21.152 "data_offset": 0, 00:11:21.152 "data_size": 0 00:11:21.152 } 00:11:21.152 ] 00:11:21.152 }' 00:11:21.152 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.152 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 [2024-10-11 09:45:05.949770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.410 [2024-10-11 09:45:05.949812] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 [2024-10-11 09:45:05.961764] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.410 [2024-10-11 09:45:05.961809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.410 [2024-10-11 09:45:05.961818] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.410 [2024-10-11 09:45:05.961829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.410 [2024-10-11 09:45:05.961836] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.410 [2024-10-11 09:45:05.961846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.410 09:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 [2024-10-11 09:45:06.018646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.410 BaseBdev1 
00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.410 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.669 [ 00:11:21.669 { 00:11:21.669 "name": "BaseBdev1", 00:11:21.669 "aliases": [ 00:11:21.669 "799cde0d-8b63-41a7-84ef-2203ad4bd711" 00:11:21.669 ], 00:11:21.669 "product_name": "Malloc disk", 00:11:21.669 "block_size": 512, 00:11:21.669 "num_blocks": 65536, 00:11:21.669 "uuid": "799cde0d-8b63-41a7-84ef-2203ad4bd711", 00:11:21.669 "assigned_rate_limits": { 00:11:21.669 
"rw_ios_per_sec": 0, 00:11:21.669 "rw_mbytes_per_sec": 0, 00:11:21.669 "r_mbytes_per_sec": 0, 00:11:21.669 "w_mbytes_per_sec": 0 00:11:21.669 }, 00:11:21.669 "claimed": true, 00:11:21.669 "claim_type": "exclusive_write", 00:11:21.669 "zoned": false, 00:11:21.669 "supported_io_types": { 00:11:21.669 "read": true, 00:11:21.669 "write": true, 00:11:21.669 "unmap": true, 00:11:21.669 "flush": true, 00:11:21.669 "reset": true, 00:11:21.669 "nvme_admin": false, 00:11:21.669 "nvme_io": false, 00:11:21.669 "nvme_io_md": false, 00:11:21.669 "write_zeroes": true, 00:11:21.669 "zcopy": true, 00:11:21.669 "get_zone_info": false, 00:11:21.669 "zone_management": false, 00:11:21.669 "zone_append": false, 00:11:21.669 "compare": false, 00:11:21.669 "compare_and_write": false, 00:11:21.669 "abort": true, 00:11:21.669 "seek_hole": false, 00:11:21.669 "seek_data": false, 00:11:21.669 "copy": true, 00:11:21.669 "nvme_iov_md": false 00:11:21.669 }, 00:11:21.669 "memory_domains": [ 00:11:21.669 { 00:11:21.669 "dma_device_id": "system", 00:11:21.669 "dma_device_type": 1 00:11:21.669 }, 00:11:21.669 { 00:11:21.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.669 "dma_device_type": 2 00:11:21.669 } 00:11:21.669 ], 00:11:21.669 "driver_specific": {} 00:11:21.669 } 00:11:21.669 ] 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.669 "name": "Existed_Raid", 00:11:21.669 "uuid": "12417f11-0e50-4877-8184-50c5f58fe364", 00:11:21.669 "strip_size_kb": 0, 00:11:21.669 "state": "configuring", 00:11:21.669 "raid_level": "raid1", 00:11:21.669 "superblock": true, 00:11:21.669 "num_base_bdevs": 3, 00:11:21.669 "num_base_bdevs_discovered": 1, 00:11:21.669 "num_base_bdevs_operational": 3, 00:11:21.669 "base_bdevs_list": [ 00:11:21.669 { 00:11:21.669 "name": "BaseBdev1", 00:11:21.669 "uuid": "799cde0d-8b63-41a7-84ef-2203ad4bd711", 00:11:21.669 "is_configured": true, 00:11:21.669 "data_offset": 2048, 00:11:21.669 "data_size": 63488 
00:11:21.669 }, 00:11:21.669 { 00:11:21.669 "name": "BaseBdev2", 00:11:21.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.669 "is_configured": false, 00:11:21.669 "data_offset": 0, 00:11:21.669 "data_size": 0 00:11:21.669 }, 00:11:21.669 { 00:11:21.669 "name": "BaseBdev3", 00:11:21.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.669 "is_configured": false, 00:11:21.669 "data_offset": 0, 00:11:21.669 "data_size": 0 00:11:21.669 } 00:11:21.669 ] 00:11:21.669 }' 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.669 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.928 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.928 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.928 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.928 [2024-10-11 09:45:06.505898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.928 [2024-10-11 09:45:06.505963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.929 [2024-10-11 09:45:06.517967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.929 [2024-10-11 09:45:06.520125] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.929 [2024-10-11 09:45:06.520172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.929 [2024-10-11 09:45:06.520184] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.929 [2024-10-11 09:45:06.520195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.929 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.187 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.187 "name": "Existed_Raid", 00:11:22.187 "uuid": "502edccf-f650-430d-acfb-7f4f6c26b2d8", 00:11:22.187 "strip_size_kb": 0, 00:11:22.187 "state": "configuring", 00:11:22.187 "raid_level": "raid1", 00:11:22.187 "superblock": true, 00:11:22.187 "num_base_bdevs": 3, 00:11:22.187 "num_base_bdevs_discovered": 1, 00:11:22.187 "num_base_bdevs_operational": 3, 00:11:22.187 "base_bdevs_list": [ 00:11:22.187 { 00:11:22.187 "name": "BaseBdev1", 00:11:22.187 "uuid": "799cde0d-8b63-41a7-84ef-2203ad4bd711", 00:11:22.187 "is_configured": true, 00:11:22.187 "data_offset": 2048, 00:11:22.187 "data_size": 63488 00:11:22.187 }, 00:11:22.187 { 00:11:22.187 "name": "BaseBdev2", 00:11:22.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.187 "is_configured": false, 00:11:22.187 "data_offset": 0, 00:11:22.187 "data_size": 0 00:11:22.187 }, 00:11:22.187 { 00:11:22.187 "name": "BaseBdev3", 00:11:22.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.187 "is_configured": false, 00:11:22.187 "data_offset": 0, 00:11:22.187 "data_size": 0 00:11:22.187 } 00:11:22.187 ] 00:11:22.187 }' 00:11:22.187 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.187 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:22.445 09:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.445 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.445 09:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.445 [2024-10-11 09:45:07.032396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.445 BaseBdev2 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.445 [ 00:11:22.445 { 00:11:22.445 "name": "BaseBdev2", 00:11:22.445 "aliases": [ 00:11:22.445 "b83e4e2a-dd4e-4623-9c3c-99f640d62cc8" 00:11:22.445 ], 00:11:22.445 "product_name": "Malloc disk", 00:11:22.445 "block_size": 512, 00:11:22.445 "num_blocks": 65536, 00:11:22.445 "uuid": "b83e4e2a-dd4e-4623-9c3c-99f640d62cc8", 00:11:22.445 "assigned_rate_limits": { 00:11:22.445 "rw_ios_per_sec": 0, 00:11:22.445 "rw_mbytes_per_sec": 0, 00:11:22.445 "r_mbytes_per_sec": 0, 00:11:22.445 "w_mbytes_per_sec": 0 00:11:22.445 }, 00:11:22.445 "claimed": true, 00:11:22.445 "claim_type": "exclusive_write", 00:11:22.445 "zoned": false, 00:11:22.445 "supported_io_types": { 00:11:22.445 "read": true, 00:11:22.445 "write": true, 00:11:22.445 "unmap": true, 00:11:22.445 "flush": true, 00:11:22.445 "reset": true, 00:11:22.445 "nvme_admin": false, 00:11:22.445 "nvme_io": false, 00:11:22.445 "nvme_io_md": false, 00:11:22.445 "write_zeroes": true, 00:11:22.445 "zcopy": true, 00:11:22.445 "get_zone_info": false, 00:11:22.445 "zone_management": false, 00:11:22.445 "zone_append": false, 00:11:22.445 "compare": false, 00:11:22.445 "compare_and_write": false, 00:11:22.445 "abort": true, 00:11:22.445 "seek_hole": false, 00:11:22.445 "seek_data": false, 00:11:22.445 "copy": true, 00:11:22.445 "nvme_iov_md": false 00:11:22.445 }, 00:11:22.445 "memory_domains": [ 00:11:22.445 { 00:11:22.445 "dma_device_id": "system", 00:11:22.445 "dma_device_type": 1 00:11:22.445 }, 00:11:22.445 { 00:11:22.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.445 "dma_device_type": 2 00:11:22.445 } 00:11:22.445 ], 00:11:22.445 "driver_specific": {} 00:11:22.445 } 00:11:22.445 ] 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.445 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.703 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.703 
09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.703 "name": "Existed_Raid", 00:11:22.703 "uuid": "502edccf-f650-430d-acfb-7f4f6c26b2d8", 00:11:22.703 "strip_size_kb": 0, 00:11:22.703 "state": "configuring", 00:11:22.703 "raid_level": "raid1", 00:11:22.703 "superblock": true, 00:11:22.703 "num_base_bdevs": 3, 00:11:22.703 "num_base_bdevs_discovered": 2, 00:11:22.703 "num_base_bdevs_operational": 3, 00:11:22.703 "base_bdevs_list": [ 00:11:22.703 { 00:11:22.703 "name": "BaseBdev1", 00:11:22.703 "uuid": "799cde0d-8b63-41a7-84ef-2203ad4bd711", 00:11:22.703 "is_configured": true, 00:11:22.703 "data_offset": 2048, 00:11:22.703 "data_size": 63488 00:11:22.703 }, 00:11:22.703 { 00:11:22.703 "name": "BaseBdev2", 00:11:22.703 "uuid": "b83e4e2a-dd4e-4623-9c3c-99f640d62cc8", 00:11:22.703 "is_configured": true, 00:11:22.703 "data_offset": 2048, 00:11:22.703 "data_size": 63488 00:11:22.703 }, 00:11:22.703 { 00:11:22.703 "name": "BaseBdev3", 00:11:22.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.703 "is_configured": false, 00:11:22.703 "data_offset": 0, 00:11:22.703 "data_size": 0 00:11:22.703 } 00:11:22.703 ] 00:11:22.703 }' 00:11:22.703 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.703 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.960 [2024-10-11 09:45:07.561312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.960 [2024-10-11 09:45:07.561647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:22.960 [2024-10-11 09:45:07.561675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.960 [2024-10-11 09:45:07.562024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:22.960 [2024-10-11 09:45:07.562231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:22.960 [2024-10-11 09:45:07.562252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:22.960 BaseBdev3 00:11:22.960 [2024-10-11 09:45:07.562428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.960 09:45:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.960 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.960 [ 00:11:22.960 { 00:11:22.960 "name": "BaseBdev3", 00:11:22.960 "aliases": [ 00:11:22.960 "2ca29b53-a433-42e3-9a30-5c5ba0b71a20" 00:11:22.960 ], 00:11:22.960 "product_name": "Malloc disk", 00:11:22.960 "block_size": 512, 00:11:22.960 "num_blocks": 65536, 00:11:22.960 "uuid": "2ca29b53-a433-42e3-9a30-5c5ba0b71a20", 00:11:22.960 "assigned_rate_limits": { 00:11:22.960 "rw_ios_per_sec": 0, 00:11:22.960 "rw_mbytes_per_sec": 0, 00:11:22.960 "r_mbytes_per_sec": 0, 00:11:22.960 "w_mbytes_per_sec": 0 00:11:22.960 }, 00:11:22.960 "claimed": true, 00:11:22.960 "claim_type": "exclusive_write", 00:11:22.960 "zoned": false, 00:11:23.217 "supported_io_types": { 00:11:23.217 "read": true, 00:11:23.217 "write": true, 00:11:23.217 "unmap": true, 00:11:23.217 "flush": true, 00:11:23.217 "reset": true, 00:11:23.217 "nvme_admin": false, 00:11:23.217 "nvme_io": false, 00:11:23.217 "nvme_io_md": false, 00:11:23.217 "write_zeroes": true, 00:11:23.217 "zcopy": true, 00:11:23.217 "get_zone_info": false, 00:11:23.217 "zone_management": false, 00:11:23.217 "zone_append": false, 00:11:23.217 "compare": false, 00:11:23.217 "compare_and_write": false, 00:11:23.217 "abort": true, 00:11:23.217 "seek_hole": false, 00:11:23.217 "seek_data": false, 00:11:23.217 "copy": true, 00:11:23.217 "nvme_iov_md": false 00:11:23.217 }, 00:11:23.217 "memory_domains": [ 00:11:23.217 { 00:11:23.217 "dma_device_id": "system", 00:11:23.217 "dma_device_type": 1 00:11:23.217 }, 00:11:23.217 { 00:11:23.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.217 "dma_device_type": 2 00:11:23.217 } 00:11:23.217 ], 00:11:23.217 "driver_specific": {} 00:11:23.217 } 00:11:23.217 ] 
00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.217 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.218 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.218 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.218 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.218 09:45:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.218 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.218 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.218 "name": "Existed_Raid", 00:11:23.218 "uuid": "502edccf-f650-430d-acfb-7f4f6c26b2d8", 00:11:23.218 "strip_size_kb": 0, 00:11:23.218 "state": "online", 00:11:23.218 "raid_level": "raid1", 00:11:23.218 "superblock": true, 00:11:23.218 "num_base_bdevs": 3, 00:11:23.218 "num_base_bdevs_discovered": 3, 00:11:23.218 "num_base_bdevs_operational": 3, 00:11:23.218 "base_bdevs_list": [ 00:11:23.218 { 00:11:23.218 "name": "BaseBdev1", 00:11:23.218 "uuid": "799cde0d-8b63-41a7-84ef-2203ad4bd711", 00:11:23.218 "is_configured": true, 00:11:23.218 "data_offset": 2048, 00:11:23.218 "data_size": 63488 00:11:23.218 }, 00:11:23.218 { 00:11:23.218 "name": "BaseBdev2", 00:11:23.218 "uuid": "b83e4e2a-dd4e-4623-9c3c-99f640d62cc8", 00:11:23.218 "is_configured": true, 00:11:23.218 "data_offset": 2048, 00:11:23.218 "data_size": 63488 00:11:23.218 }, 00:11:23.218 { 00:11:23.218 "name": "BaseBdev3", 00:11:23.218 "uuid": "2ca29b53-a433-42e3-9a30-5c5ba0b71a20", 00:11:23.218 "is_configured": true, 00:11:23.218 "data_offset": 2048, 00:11:23.218 "data_size": 63488 00:11:23.218 } 00:11:23.218 ] 00:11:23.218 }' 00:11:23.218 09:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.218 09:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.476 [2024-10-11 09:45:08.060952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.476 "name": "Existed_Raid", 00:11:23.476 "aliases": [ 00:11:23.476 "502edccf-f650-430d-acfb-7f4f6c26b2d8" 00:11:23.476 ], 00:11:23.476 "product_name": "Raid Volume", 00:11:23.476 "block_size": 512, 00:11:23.476 "num_blocks": 63488, 00:11:23.476 "uuid": "502edccf-f650-430d-acfb-7f4f6c26b2d8", 00:11:23.476 "assigned_rate_limits": { 00:11:23.476 "rw_ios_per_sec": 0, 00:11:23.476 "rw_mbytes_per_sec": 0, 00:11:23.476 "r_mbytes_per_sec": 0, 00:11:23.476 "w_mbytes_per_sec": 0 00:11:23.476 }, 00:11:23.476 "claimed": false, 00:11:23.476 "zoned": false, 00:11:23.476 "supported_io_types": { 00:11:23.476 "read": true, 00:11:23.476 "write": true, 00:11:23.476 "unmap": false, 00:11:23.476 "flush": false, 00:11:23.476 "reset": true, 00:11:23.476 "nvme_admin": false, 00:11:23.476 "nvme_io": false, 00:11:23.476 "nvme_io_md": false, 00:11:23.476 
"write_zeroes": true, 00:11:23.476 "zcopy": false, 00:11:23.476 "get_zone_info": false, 00:11:23.476 "zone_management": false, 00:11:23.476 "zone_append": false, 00:11:23.476 "compare": false, 00:11:23.476 "compare_and_write": false, 00:11:23.476 "abort": false, 00:11:23.476 "seek_hole": false, 00:11:23.476 "seek_data": false, 00:11:23.476 "copy": false, 00:11:23.476 "nvme_iov_md": false 00:11:23.476 }, 00:11:23.476 "memory_domains": [ 00:11:23.476 { 00:11:23.476 "dma_device_id": "system", 00:11:23.476 "dma_device_type": 1 00:11:23.476 }, 00:11:23.476 { 00:11:23.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.476 "dma_device_type": 2 00:11:23.476 }, 00:11:23.476 { 00:11:23.476 "dma_device_id": "system", 00:11:23.476 "dma_device_type": 1 00:11:23.476 }, 00:11:23.476 { 00:11:23.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.476 "dma_device_type": 2 00:11:23.476 }, 00:11:23.476 { 00:11:23.476 "dma_device_id": "system", 00:11:23.476 "dma_device_type": 1 00:11:23.476 }, 00:11:23.476 { 00:11:23.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.476 "dma_device_type": 2 00:11:23.476 } 00:11:23.476 ], 00:11:23.476 "driver_specific": { 00:11:23.476 "raid": { 00:11:23.476 "uuid": "502edccf-f650-430d-acfb-7f4f6c26b2d8", 00:11:23.476 "strip_size_kb": 0, 00:11:23.476 "state": "online", 00:11:23.476 "raid_level": "raid1", 00:11:23.476 "superblock": true, 00:11:23.476 "num_base_bdevs": 3, 00:11:23.476 "num_base_bdevs_discovered": 3, 00:11:23.476 "num_base_bdevs_operational": 3, 00:11:23.476 "base_bdevs_list": [ 00:11:23.476 { 00:11:23.476 "name": "BaseBdev1", 00:11:23.476 "uuid": "799cde0d-8b63-41a7-84ef-2203ad4bd711", 00:11:23.476 "is_configured": true, 00:11:23.476 "data_offset": 2048, 00:11:23.476 "data_size": 63488 00:11:23.476 }, 00:11:23.476 { 00:11:23.476 "name": "BaseBdev2", 00:11:23.476 "uuid": "b83e4e2a-dd4e-4623-9c3c-99f640d62cc8", 00:11:23.476 "is_configured": true, 00:11:23.476 "data_offset": 2048, 00:11:23.476 "data_size": 63488 00:11:23.476 }, 
00:11:23.476 { 00:11:23.476 "name": "BaseBdev3", 00:11:23.476 "uuid": "2ca29b53-a433-42e3-9a30-5c5ba0b71a20", 00:11:23.476 "is_configured": true, 00:11:23.476 "data_offset": 2048, 00:11:23.476 "data_size": 63488 00:11:23.476 } 00:11:23.476 ] 00:11:23.476 } 00:11:23.476 } 00:11:23.476 }' 00:11:23.476 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.734 BaseBdev2 00:11:23.734 BaseBdev3' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.734 
09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.734 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.735 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.735 [2024-10-11 09:45:08.328191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.993 
09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.993 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.994 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.994 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.994 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.994 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.994 "name": "Existed_Raid", 00:11:23.994 "uuid": "502edccf-f650-430d-acfb-7f4f6c26b2d8", 00:11:23.994 "strip_size_kb": 0, 00:11:23.994 "state": "online", 00:11:23.994 "raid_level": "raid1", 00:11:23.994 "superblock": true, 00:11:23.994 "num_base_bdevs": 3, 00:11:23.994 "num_base_bdevs_discovered": 2, 00:11:23.994 "num_base_bdevs_operational": 2, 00:11:23.994 "base_bdevs_list": [ 00:11:23.994 { 00:11:23.994 "name": null, 00:11:23.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.994 "is_configured": false, 00:11:23.994 "data_offset": 0, 00:11:23.994 "data_size": 63488 00:11:23.994 }, 00:11:23.994 { 00:11:23.994 "name": "BaseBdev2", 00:11:23.994 "uuid": "b83e4e2a-dd4e-4623-9c3c-99f640d62cc8", 00:11:23.994 "is_configured": true, 00:11:23.994 "data_offset": 2048, 00:11:23.994 "data_size": 63488 00:11:23.994 }, 00:11:23.994 { 00:11:23.994 "name": "BaseBdev3", 00:11:23.994 "uuid": "2ca29b53-a433-42e3-9a30-5c5ba0b71a20", 00:11:23.994 "is_configured": true, 00:11:23.994 "data_offset": 2048, 00:11:23.994 "data_size": 63488 00:11:23.994 } 00:11:23.994 ] 00:11:23.994 }' 00:11:23.994 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.994 
09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.252 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:24.252 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.252 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.252 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.252 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.252 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.252 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.510 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.510 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.510 09:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:24.510 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.510 09:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.510 [2024-10-11 09:45:08.901077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.510 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.510 [2024-10-11 09:45:09.064320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.510 [2024-10-11 09:45:09.064509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.781 [2024-10-11 09:45:09.166402] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.781 [2024-10-11 09:45:09.166561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.781 [2024-10-11 09:45:09.166613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.781 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.782 BaseBdev2 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.782 09:45:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.782 [ 00:11:24.782 { 00:11:24.782 "name": "BaseBdev2", 00:11:24.782 "aliases": [ 00:11:24.782 "a0e98a10-dc7a-4b10-9b59-da58590682d4" 00:11:24.782 ], 00:11:24.782 "product_name": "Malloc disk", 00:11:24.782 "block_size": 512, 00:11:24.782 "num_blocks": 65536, 00:11:24.782 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:24.782 "assigned_rate_limits": { 00:11:24.782 "rw_ios_per_sec": 0, 00:11:24.782 "rw_mbytes_per_sec": 0, 00:11:24.782 "r_mbytes_per_sec": 0, 00:11:24.782 "w_mbytes_per_sec": 0 00:11:24.782 }, 00:11:24.782 "claimed": false, 00:11:24.782 "zoned": false, 00:11:24.782 "supported_io_types": { 00:11:24.782 "read": true, 00:11:24.782 "write": true, 00:11:24.782 "unmap": true, 00:11:24.782 "flush": true, 00:11:24.782 "reset": true, 00:11:24.782 "nvme_admin": false, 00:11:24.782 "nvme_io": false, 00:11:24.782 "nvme_io_md": false, 00:11:24.782 
"write_zeroes": true, 00:11:24.782 "zcopy": true, 00:11:24.782 "get_zone_info": false, 00:11:24.782 "zone_management": false, 00:11:24.782 "zone_append": false, 00:11:24.782 "compare": false, 00:11:24.782 "compare_and_write": false, 00:11:24.782 "abort": true, 00:11:24.782 "seek_hole": false, 00:11:24.782 "seek_data": false, 00:11:24.782 "copy": true, 00:11:24.782 "nvme_iov_md": false 00:11:24.782 }, 00:11:24.782 "memory_domains": [ 00:11:24.782 { 00:11:24.782 "dma_device_id": "system", 00:11:24.782 "dma_device_type": 1 00:11:24.782 }, 00:11:24.782 { 00:11:24.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.782 "dma_device_type": 2 00:11:24.782 } 00:11:24.782 ], 00:11:24.782 "driver_specific": {} 00:11:24.782 } 00:11:24.782 ] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.782 BaseBdev3 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.782 [ 00:11:24.782 { 00:11:24.782 "name": "BaseBdev3", 00:11:24.782 "aliases": [ 00:11:24.782 "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87" 00:11:24.782 ], 00:11:24.782 "product_name": "Malloc disk", 00:11:24.782 "block_size": 512, 00:11:24.782 "num_blocks": 65536, 00:11:24.782 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:24.782 "assigned_rate_limits": { 00:11:24.782 "rw_ios_per_sec": 0, 00:11:24.782 "rw_mbytes_per_sec": 0, 00:11:24.782 "r_mbytes_per_sec": 0, 00:11:24.782 "w_mbytes_per_sec": 0 00:11:24.782 }, 00:11:24.782 "claimed": false, 00:11:24.782 "zoned": false, 00:11:24.782 "supported_io_types": { 00:11:24.782 "read": true, 00:11:24.782 "write": true, 00:11:24.782 "unmap": true, 00:11:24.782 "flush": true, 00:11:24.782 "reset": true, 00:11:24.782 "nvme_admin": false, 00:11:24.782 "nvme_io": false, 
00:11:24.782 "nvme_io_md": false, 00:11:24.782 "write_zeroes": true, 00:11:24.782 "zcopy": true, 00:11:24.782 "get_zone_info": false, 00:11:24.782 "zone_management": false, 00:11:24.782 "zone_append": false, 00:11:24.782 "compare": false, 00:11:24.782 "compare_and_write": false, 00:11:24.782 "abort": true, 00:11:24.782 "seek_hole": false, 00:11:24.782 "seek_data": false, 00:11:24.782 "copy": true, 00:11:24.782 "nvme_iov_md": false 00:11:24.782 }, 00:11:24.782 "memory_domains": [ 00:11:24.782 { 00:11:24.782 "dma_device_id": "system", 00:11:24.782 "dma_device_type": 1 00:11:24.782 }, 00:11:24.782 { 00:11:24.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.782 "dma_device_type": 2 00:11:24.782 } 00:11:24.782 ], 00:11:24.782 "driver_specific": {} 00:11:24.782 } 00:11:24.782 ] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.782 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.782 [2024-10-11 09:45:09.410540] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.782 [2024-10-11 09:45:09.410659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.782 [2024-10-11 09:45:09.410717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:11:25.054 [2024-10-11 09:45:09.412963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.054 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.055 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 09:45:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.055 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.055 "name": "Existed_Raid", 00:11:25.055 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:25.055 "strip_size_kb": 0, 00:11:25.055 "state": "configuring", 00:11:25.055 "raid_level": "raid1", 00:11:25.055 "superblock": true, 00:11:25.055 "num_base_bdevs": 3, 00:11:25.055 "num_base_bdevs_discovered": 2, 00:11:25.055 "num_base_bdevs_operational": 3, 00:11:25.055 "base_bdevs_list": [ 00:11:25.055 { 00:11:25.055 "name": "BaseBdev1", 00:11:25.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.055 "is_configured": false, 00:11:25.055 "data_offset": 0, 00:11:25.055 "data_size": 0 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev2", 00:11:25.055 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:25.055 "is_configured": true, 00:11:25.055 "data_offset": 2048, 00:11:25.055 "data_size": 63488 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev3", 00:11:25.055 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:25.055 "is_configured": true, 00:11:25.055 "data_offset": 2048, 00:11:25.055 "data_size": 63488 00:11:25.055 } 00:11:25.055 ] 00:11:25.055 }' 00:11:25.055 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.055 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.312 [2024-10-11 09:45:09.893662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.312 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.313 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.313 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.313 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.313 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.313 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.571 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.571 "name": "Existed_Raid", 00:11:25.571 "uuid": 
"86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:25.571 "strip_size_kb": 0, 00:11:25.571 "state": "configuring", 00:11:25.571 "raid_level": "raid1", 00:11:25.571 "superblock": true, 00:11:25.571 "num_base_bdevs": 3, 00:11:25.571 "num_base_bdevs_discovered": 1, 00:11:25.571 "num_base_bdevs_operational": 3, 00:11:25.571 "base_bdevs_list": [ 00:11:25.571 { 00:11:25.571 "name": "BaseBdev1", 00:11:25.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.571 "is_configured": false, 00:11:25.571 "data_offset": 0, 00:11:25.571 "data_size": 0 00:11:25.571 }, 00:11:25.571 { 00:11:25.571 "name": null, 00:11:25.571 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:25.571 "is_configured": false, 00:11:25.571 "data_offset": 0, 00:11:25.571 "data_size": 63488 00:11:25.571 }, 00:11:25.571 { 00:11:25.571 "name": "BaseBdev3", 00:11:25.571 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:25.571 "is_configured": true, 00:11:25.571 "data_offset": 2048, 00:11:25.571 "data_size": 63488 00:11:25.571 } 00:11:25.571 ] 00:11:25.571 }' 00:11:25.571 09:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.571 09:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:25.829 09:45:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.829 [2024-10-11 09:45:10.438454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.829 BaseBdev1 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:25.829 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.086 [ 00:11:26.086 { 00:11:26.086 "name": "BaseBdev1", 00:11:26.086 "aliases": [ 00:11:26.086 "9be579e2-6db8-4f72-9e9e-2fb2031a24d9" 00:11:26.086 ], 00:11:26.086 "product_name": "Malloc disk", 00:11:26.086 "block_size": 512, 00:11:26.086 "num_blocks": 65536, 00:11:26.086 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:26.086 "assigned_rate_limits": { 00:11:26.086 "rw_ios_per_sec": 0, 00:11:26.086 "rw_mbytes_per_sec": 0, 00:11:26.086 "r_mbytes_per_sec": 0, 00:11:26.086 "w_mbytes_per_sec": 0 00:11:26.086 }, 00:11:26.086 "claimed": true, 00:11:26.086 "claim_type": "exclusive_write", 00:11:26.086 "zoned": false, 00:11:26.086 "supported_io_types": { 00:11:26.086 "read": true, 00:11:26.086 "write": true, 00:11:26.086 "unmap": true, 00:11:26.086 "flush": true, 00:11:26.086 "reset": true, 00:11:26.086 "nvme_admin": false, 00:11:26.086 "nvme_io": false, 00:11:26.086 "nvme_io_md": false, 00:11:26.086 "write_zeroes": true, 00:11:26.086 "zcopy": true, 00:11:26.086 "get_zone_info": false, 00:11:26.086 "zone_management": false, 00:11:26.086 "zone_append": false, 00:11:26.086 "compare": false, 00:11:26.086 "compare_and_write": false, 00:11:26.086 "abort": true, 00:11:26.086 "seek_hole": false, 00:11:26.086 "seek_data": false, 00:11:26.086 "copy": true, 00:11:26.086 "nvme_iov_md": false 00:11:26.086 }, 00:11:26.086 "memory_domains": [ 00:11:26.086 { 00:11:26.086 "dma_device_id": "system", 00:11:26.086 "dma_device_type": 1 00:11:26.086 }, 00:11:26.086 { 00:11:26.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.086 "dma_device_type": 2 00:11:26.086 } 00:11:26.086 ], 00:11:26.086 "driver_specific": {} 00:11:26.086 } 00:11:26.086 ] 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:26.086 
09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.086 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.087 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.087 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.087 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.087 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.087 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.087 "name": "Existed_Raid", 00:11:26.087 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:26.087 "strip_size_kb": 0, 
00:11:26.087 "state": "configuring", 00:11:26.087 "raid_level": "raid1", 00:11:26.087 "superblock": true, 00:11:26.087 "num_base_bdevs": 3, 00:11:26.087 "num_base_bdevs_discovered": 2, 00:11:26.087 "num_base_bdevs_operational": 3, 00:11:26.087 "base_bdevs_list": [ 00:11:26.087 { 00:11:26.087 "name": "BaseBdev1", 00:11:26.087 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:26.087 "is_configured": true, 00:11:26.087 "data_offset": 2048, 00:11:26.087 "data_size": 63488 00:11:26.087 }, 00:11:26.087 { 00:11:26.087 "name": null, 00:11:26.087 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:26.087 "is_configured": false, 00:11:26.087 "data_offset": 0, 00:11:26.087 "data_size": 63488 00:11:26.087 }, 00:11:26.087 { 00:11:26.087 "name": "BaseBdev3", 00:11:26.087 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:26.087 "is_configured": true, 00:11:26.087 "data_offset": 2048, 00:11:26.087 "data_size": 63488 00:11:26.087 } 00:11:26.087 ] 00:11:26.087 }' 00:11:26.087 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.087 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.345 [2024-10-11 09:45:10.949649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:26.345 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.603 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.603 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.603 "name": "Existed_Raid", 00:11:26.603 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:26.603 "strip_size_kb": 0, 00:11:26.603 "state": "configuring", 00:11:26.603 "raid_level": "raid1", 00:11:26.603 "superblock": true, 00:11:26.603 "num_base_bdevs": 3, 00:11:26.603 "num_base_bdevs_discovered": 1, 00:11:26.603 "num_base_bdevs_operational": 3, 00:11:26.603 "base_bdevs_list": [ 00:11:26.603 { 00:11:26.603 "name": "BaseBdev1", 00:11:26.603 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:26.603 "is_configured": true, 00:11:26.603 "data_offset": 2048, 00:11:26.603 "data_size": 63488 00:11:26.603 }, 00:11:26.603 { 00:11:26.603 "name": null, 00:11:26.603 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:26.603 "is_configured": false, 00:11:26.603 "data_offset": 0, 00:11:26.603 "data_size": 63488 00:11:26.603 }, 00:11:26.603 { 00:11:26.603 "name": null, 00:11:26.603 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:26.603 "is_configured": false, 00:11:26.603 "data_offset": 0, 00:11:26.603 "data_size": 63488 00:11:26.603 } 00:11:26.603 ] 00:11:26.603 }' 00:11:26.603 09:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.603 09:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.861 [2024-10-11 09:45:11.464879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.861 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.120 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.120 "name": "Existed_Raid", 00:11:27.120 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:27.120 "strip_size_kb": 0, 00:11:27.120 "state": "configuring", 00:11:27.120 "raid_level": "raid1", 00:11:27.120 "superblock": true, 00:11:27.120 "num_base_bdevs": 3, 00:11:27.120 "num_base_bdevs_discovered": 2, 00:11:27.120 "num_base_bdevs_operational": 3, 00:11:27.120 "base_bdevs_list": [ 00:11:27.120 { 00:11:27.120 "name": "BaseBdev1", 00:11:27.120 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:27.120 "is_configured": true, 00:11:27.120 "data_offset": 2048, 00:11:27.120 "data_size": 63488 00:11:27.120 }, 00:11:27.120 { 00:11:27.120 "name": null, 00:11:27.120 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:27.120 "is_configured": false, 00:11:27.120 "data_offset": 0, 00:11:27.120 "data_size": 63488 00:11:27.120 }, 00:11:27.120 { 00:11:27.120 "name": "BaseBdev3", 00:11:27.120 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:27.120 "is_configured": true, 00:11:27.120 "data_offset": 2048, 00:11:27.120 "data_size": 63488 00:11:27.120 } 00:11:27.120 ] 00:11:27.120 }' 00:11:27.120 09:45:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.120 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.379 09:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.379 [2024-10-11 09:45:11.948055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.637 "name": "Existed_Raid", 00:11:27.637 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:27.637 "strip_size_kb": 0, 00:11:27.637 "state": "configuring", 00:11:27.637 "raid_level": "raid1", 00:11:27.637 "superblock": true, 00:11:27.637 "num_base_bdevs": 3, 00:11:27.637 "num_base_bdevs_discovered": 1, 00:11:27.637 "num_base_bdevs_operational": 3, 00:11:27.637 "base_bdevs_list": [ 00:11:27.637 { 00:11:27.637 "name": null, 00:11:27.637 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:27.637 "is_configured": false, 00:11:27.637 "data_offset": 0, 00:11:27.637 "data_size": 63488 00:11:27.637 }, 00:11:27.637 { 00:11:27.637 "name": null, 00:11:27.637 "uuid": 
"a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:27.637 "is_configured": false, 00:11:27.637 "data_offset": 0, 00:11:27.637 "data_size": 63488 00:11:27.637 }, 00:11:27.637 { 00:11:27.637 "name": "BaseBdev3", 00:11:27.637 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:27.637 "is_configured": true, 00:11:27.637 "data_offset": 2048, 00:11:27.637 "data_size": 63488 00:11:27.637 } 00:11:27.637 ] 00:11:27.637 }' 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.637 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.896 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.896 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.896 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.896 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.154 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.154 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:28.154 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:28.154 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.155 [2024-10-11 09:45:12.578540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.155 "name": "Existed_Raid", 00:11:28.155 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:28.155 "strip_size_kb": 0, 00:11:28.155 "state": "configuring", 00:11:28.155 
"raid_level": "raid1", 00:11:28.155 "superblock": true, 00:11:28.155 "num_base_bdevs": 3, 00:11:28.155 "num_base_bdevs_discovered": 2, 00:11:28.155 "num_base_bdevs_operational": 3, 00:11:28.155 "base_bdevs_list": [ 00:11:28.155 { 00:11:28.155 "name": null, 00:11:28.155 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:28.155 "is_configured": false, 00:11:28.155 "data_offset": 0, 00:11:28.155 "data_size": 63488 00:11:28.155 }, 00:11:28.155 { 00:11:28.155 "name": "BaseBdev2", 00:11:28.155 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:28.155 "is_configured": true, 00:11:28.155 "data_offset": 2048, 00:11:28.155 "data_size": 63488 00:11:28.155 }, 00:11:28.155 { 00:11:28.155 "name": "BaseBdev3", 00:11:28.155 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:28.155 "is_configured": true, 00:11:28.155 "data_offset": 2048, 00:11:28.155 "data_size": 63488 00:11:28.155 } 00:11:28.155 ] 00:11:28.155 }' 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.155 09:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.413 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.413 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.413 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.413 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.672 09:45:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9be579e2-6db8-4f72-9e9e-2fb2031a24d9 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.672 [2024-10-11 09:45:13.165002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:28.672 [2024-10-11 09:45:13.165379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.672 [2024-10-11 09:45:13.165398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.672 [2024-10-11 09:45:13.165684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:28.672 [2024-10-11 09:45:13.165895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.672 [2024-10-11 09:45:13.165915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:28.672 [2024-10-11 09:45:13.166080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.672 NewBaseBdev 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:28.672 
09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.672 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.672 [ 00:11:28.672 { 00:11:28.672 "name": "NewBaseBdev", 00:11:28.672 "aliases": [ 00:11:28.672 "9be579e2-6db8-4f72-9e9e-2fb2031a24d9" 00:11:28.672 ], 00:11:28.672 "product_name": "Malloc disk", 00:11:28.672 "block_size": 512, 00:11:28.672 "num_blocks": 65536, 00:11:28.672 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:28.672 "assigned_rate_limits": { 00:11:28.672 "rw_ios_per_sec": 0, 00:11:28.672 "rw_mbytes_per_sec": 0, 00:11:28.672 "r_mbytes_per_sec": 0, 00:11:28.672 "w_mbytes_per_sec": 0 00:11:28.672 }, 00:11:28.672 "claimed": true, 00:11:28.672 "claim_type": "exclusive_write", 00:11:28.672 
"zoned": false, 00:11:28.672 "supported_io_types": { 00:11:28.672 "read": true, 00:11:28.672 "write": true, 00:11:28.672 "unmap": true, 00:11:28.672 "flush": true, 00:11:28.672 "reset": true, 00:11:28.672 "nvme_admin": false, 00:11:28.672 "nvme_io": false, 00:11:28.672 "nvme_io_md": false, 00:11:28.672 "write_zeroes": true, 00:11:28.672 "zcopy": true, 00:11:28.672 "get_zone_info": false, 00:11:28.672 "zone_management": false, 00:11:28.672 "zone_append": false, 00:11:28.672 "compare": false, 00:11:28.672 "compare_and_write": false, 00:11:28.672 "abort": true, 00:11:28.672 "seek_hole": false, 00:11:28.672 "seek_data": false, 00:11:28.672 "copy": true, 00:11:28.673 "nvme_iov_md": false 00:11:28.673 }, 00:11:28.673 "memory_domains": [ 00:11:28.673 { 00:11:28.673 "dma_device_id": "system", 00:11:28.673 "dma_device_type": 1 00:11:28.673 }, 00:11:28.673 { 00:11:28.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.673 "dma_device_type": 2 00:11:28.673 } 00:11:28.673 ], 00:11:28.673 "driver_specific": {} 00:11:28.673 } 00:11:28.673 ] 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.673 "name": "Existed_Raid", 00:11:28.673 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:28.673 "strip_size_kb": 0, 00:11:28.673 "state": "online", 00:11:28.673 "raid_level": "raid1", 00:11:28.673 "superblock": true, 00:11:28.673 "num_base_bdevs": 3, 00:11:28.673 "num_base_bdevs_discovered": 3, 00:11:28.673 "num_base_bdevs_operational": 3, 00:11:28.673 "base_bdevs_list": [ 00:11:28.673 { 00:11:28.673 "name": "NewBaseBdev", 00:11:28.673 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:28.673 "is_configured": true, 00:11:28.673 "data_offset": 2048, 00:11:28.673 "data_size": 63488 00:11:28.673 }, 00:11:28.673 { 00:11:28.673 "name": "BaseBdev2", 00:11:28.673 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:28.673 "is_configured": true, 00:11:28.673 "data_offset": 2048, 00:11:28.673 "data_size": 63488 00:11:28.673 }, 00:11:28.673 
{ 00:11:28.673 "name": "BaseBdev3", 00:11:28.673 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:28.673 "is_configured": true, 00:11:28.673 "data_offset": 2048, 00:11:28.673 "data_size": 63488 00:11:28.673 } 00:11:28.673 ] 00:11:28.673 }' 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.673 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.240 [2024-10-11 09:45:13.716499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.240 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.240 "name": "Existed_Raid", 00:11:29.240 
"aliases": [ 00:11:29.240 "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2" 00:11:29.240 ], 00:11:29.240 "product_name": "Raid Volume", 00:11:29.240 "block_size": 512, 00:11:29.240 "num_blocks": 63488, 00:11:29.240 "uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:29.240 "assigned_rate_limits": { 00:11:29.240 "rw_ios_per_sec": 0, 00:11:29.240 "rw_mbytes_per_sec": 0, 00:11:29.240 "r_mbytes_per_sec": 0, 00:11:29.241 "w_mbytes_per_sec": 0 00:11:29.241 }, 00:11:29.241 "claimed": false, 00:11:29.241 "zoned": false, 00:11:29.241 "supported_io_types": { 00:11:29.241 "read": true, 00:11:29.241 "write": true, 00:11:29.241 "unmap": false, 00:11:29.241 "flush": false, 00:11:29.241 "reset": true, 00:11:29.241 "nvme_admin": false, 00:11:29.241 "nvme_io": false, 00:11:29.241 "nvme_io_md": false, 00:11:29.241 "write_zeroes": true, 00:11:29.241 "zcopy": false, 00:11:29.241 "get_zone_info": false, 00:11:29.241 "zone_management": false, 00:11:29.241 "zone_append": false, 00:11:29.241 "compare": false, 00:11:29.241 "compare_and_write": false, 00:11:29.241 "abort": false, 00:11:29.241 "seek_hole": false, 00:11:29.241 "seek_data": false, 00:11:29.241 "copy": false, 00:11:29.241 "nvme_iov_md": false 00:11:29.241 }, 00:11:29.241 "memory_domains": [ 00:11:29.241 { 00:11:29.241 "dma_device_id": "system", 00:11:29.241 "dma_device_type": 1 00:11:29.241 }, 00:11:29.241 { 00:11:29.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.241 "dma_device_type": 2 00:11:29.241 }, 00:11:29.241 { 00:11:29.241 "dma_device_id": "system", 00:11:29.241 "dma_device_type": 1 00:11:29.241 }, 00:11:29.241 { 00:11:29.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.241 "dma_device_type": 2 00:11:29.241 }, 00:11:29.241 { 00:11:29.241 "dma_device_id": "system", 00:11:29.241 "dma_device_type": 1 00:11:29.241 }, 00:11:29.241 { 00:11:29.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.241 "dma_device_type": 2 00:11:29.241 } 00:11:29.241 ], 00:11:29.241 "driver_specific": { 00:11:29.241 "raid": { 00:11:29.241 
"uuid": "86cd51b7-1c20-440f-b9a7-4d4ad65a24d2", 00:11:29.241 "strip_size_kb": 0, 00:11:29.241 "state": "online", 00:11:29.241 "raid_level": "raid1", 00:11:29.241 "superblock": true, 00:11:29.241 "num_base_bdevs": 3, 00:11:29.241 "num_base_bdevs_discovered": 3, 00:11:29.241 "num_base_bdevs_operational": 3, 00:11:29.241 "base_bdevs_list": [ 00:11:29.241 { 00:11:29.241 "name": "NewBaseBdev", 00:11:29.241 "uuid": "9be579e2-6db8-4f72-9e9e-2fb2031a24d9", 00:11:29.241 "is_configured": true, 00:11:29.241 "data_offset": 2048, 00:11:29.241 "data_size": 63488 00:11:29.241 }, 00:11:29.241 { 00:11:29.241 "name": "BaseBdev2", 00:11:29.241 "uuid": "a0e98a10-dc7a-4b10-9b59-da58590682d4", 00:11:29.241 "is_configured": true, 00:11:29.241 "data_offset": 2048, 00:11:29.241 "data_size": 63488 00:11:29.241 }, 00:11:29.241 { 00:11:29.241 "name": "BaseBdev3", 00:11:29.241 "uuid": "33b0e386-c2c1-4a54-8e3b-c1ad1b54ed87", 00:11:29.241 "is_configured": true, 00:11:29.241 "data_offset": 2048, 00:11:29.241 "data_size": 63488 00:11:29.241 } 00:11:29.241 ] 00:11:29.241 } 00:11:29.241 } 00:11:29.241 }' 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:29.241 BaseBdev2 00:11:29.241 BaseBdev3' 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:29.241 09:45:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.241 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.500 09:45:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.500 09:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.500 [2024-10-11 09:45:14.015796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.500 [2024-10-11 09:45:14.015832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.500 [2024-10-11 09:45:14.015923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.500 [2024-10-11 09:45:14.016250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.500 [2024-10-11 09:45:14.016268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68475 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 68475 ']' 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68475 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68475 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68475' 00:11:29.500 killing process with pid 68475 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68475 00:11:29.500 [2024-10-11 09:45:14.066438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.500 09:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68475 00:11:30.067 [2024-10-11 09:45:14.414386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.461 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:31.461 ************************************ 00:11:31.461 END TEST raid_state_function_test_sb 00:11:31.461 ************************************ 00:11:31.461 00:11:31.461 real 0m11.154s 00:11:31.461 user 0m17.665s 00:11:31.461 sys 0m1.821s 00:11:31.461 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.461 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.461 09:45:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:31.461 09:45:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:31.461 09:45:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.461 09:45:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.461 ************************************ 00:11:31.461 START TEST raid_superblock_test 00:11:31.461 ************************************ 00:11:31.461 09:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:11:31.461 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:31.461 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:31.461 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:31.461 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:31.461 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:31.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69101 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69101 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 69101 ']' 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.462 09:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:31.462 [2024-10-11 09:45:15.809585] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:11:31.462 [2024-10-11 09:45:15.809848] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69101 ] 00:11:31.462 [2024-10-11 09:45:15.964960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.723 [2024-10-11 09:45:16.105607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.723 [2024-10-11 09:45:16.348635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.723 [2024-10-11 09:45:16.348801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:32.290 
09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.290 malloc1 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.290 [2024-10-11 09:45:16.779299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.290 [2024-10-11 09:45:16.779388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.290 [2024-10-11 09:45:16.779420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:32.290 [2024-10-11 09:45:16.779432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.290 [2024-10-11 09:45:16.782021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.290 [2024-10-11 09:45:16.782066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.290 pt1 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.290 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.291 malloc2 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.291 [2024-10-11 09:45:16.843159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:32.291 [2024-10-11 09:45:16.843226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.291 [2024-10-11 09:45:16.843255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:32.291 [2024-10-11 09:45:16.843266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.291 [2024-10-11 09:45:16.845768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.291 [2024-10-11 09:45:16.845807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:32.291 
pt2 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.291 malloc3 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.291 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.291 [2024-10-11 09:45:16.917842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:32.291 [2024-10-11 09:45:16.917978] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.291 [2024-10-11 09:45:16.918011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:32.291 [2024-10-11 09:45:16.918022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.291 [2024-10-11 09:45:16.920422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.291 [2024-10-11 09:45:16.920464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:32.549 pt3 00:11:32.549 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.549 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.549 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.549 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:32.549 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.549 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.549 [2024-10-11 09:45:16.929878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.549 [2024-10-11 09:45:16.932028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.549 [2024-10-11 09:45:16.932179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:32.550 [2024-10-11 09:45:16.932385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:32.550 [2024-10-11 09:45:16.932407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.550 [2024-10-11 09:45:16.932705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:32.550 
[2024-10-11 09:45:16.932921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:32.550 [2024-10-11 09:45:16.932936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:32.550 [2024-10-11 09:45:16.933135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.550 "name": "raid_bdev1", 00:11:32.550 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:32.550 "strip_size_kb": 0, 00:11:32.550 "state": "online", 00:11:32.550 "raid_level": "raid1", 00:11:32.550 "superblock": true, 00:11:32.550 "num_base_bdevs": 3, 00:11:32.550 "num_base_bdevs_discovered": 3, 00:11:32.550 "num_base_bdevs_operational": 3, 00:11:32.550 "base_bdevs_list": [ 00:11:32.550 { 00:11:32.550 "name": "pt1", 00:11:32.550 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.550 "is_configured": true, 00:11:32.550 "data_offset": 2048, 00:11:32.550 "data_size": 63488 00:11:32.550 }, 00:11:32.550 { 00:11:32.550 "name": "pt2", 00:11:32.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.550 "is_configured": true, 00:11:32.550 "data_offset": 2048, 00:11:32.550 "data_size": 63488 00:11:32.550 }, 00:11:32.550 { 00:11:32.550 "name": "pt3", 00:11:32.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.550 "is_configured": true, 00:11:32.550 "data_offset": 2048, 00:11:32.550 "data_size": 63488 00:11:32.550 } 00:11:32.550 ] 00:11:32.550 }' 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.550 09:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.808 09:45:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.808 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.808 [2024-10-11 09:45:17.429380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.067 "name": "raid_bdev1", 00:11:33.067 "aliases": [ 00:11:33.067 "9c07d535-eaab-4903-8651-b71edd893743" 00:11:33.067 ], 00:11:33.067 "product_name": "Raid Volume", 00:11:33.067 "block_size": 512, 00:11:33.067 "num_blocks": 63488, 00:11:33.067 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:33.067 "assigned_rate_limits": { 00:11:33.067 "rw_ios_per_sec": 0, 00:11:33.067 "rw_mbytes_per_sec": 0, 00:11:33.067 "r_mbytes_per_sec": 0, 00:11:33.067 "w_mbytes_per_sec": 0 00:11:33.067 }, 00:11:33.067 "claimed": false, 00:11:33.067 "zoned": false, 00:11:33.067 "supported_io_types": { 00:11:33.067 "read": true, 00:11:33.067 "write": true, 00:11:33.067 "unmap": false, 00:11:33.067 "flush": false, 00:11:33.067 "reset": true, 00:11:33.067 "nvme_admin": false, 00:11:33.067 "nvme_io": false, 00:11:33.067 "nvme_io_md": false, 00:11:33.067 "write_zeroes": true, 00:11:33.067 "zcopy": false, 00:11:33.067 "get_zone_info": false, 00:11:33.067 "zone_management": false, 00:11:33.067 "zone_append": false, 00:11:33.067 "compare": false, 00:11:33.067 
"compare_and_write": false, 00:11:33.067 "abort": false, 00:11:33.067 "seek_hole": false, 00:11:33.067 "seek_data": false, 00:11:33.067 "copy": false, 00:11:33.067 "nvme_iov_md": false 00:11:33.067 }, 00:11:33.067 "memory_domains": [ 00:11:33.067 { 00:11:33.067 "dma_device_id": "system", 00:11:33.067 "dma_device_type": 1 00:11:33.067 }, 00:11:33.067 { 00:11:33.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.067 "dma_device_type": 2 00:11:33.067 }, 00:11:33.067 { 00:11:33.067 "dma_device_id": "system", 00:11:33.067 "dma_device_type": 1 00:11:33.067 }, 00:11:33.067 { 00:11:33.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.067 "dma_device_type": 2 00:11:33.067 }, 00:11:33.067 { 00:11:33.067 "dma_device_id": "system", 00:11:33.067 "dma_device_type": 1 00:11:33.067 }, 00:11:33.067 { 00:11:33.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.067 "dma_device_type": 2 00:11:33.067 } 00:11:33.067 ], 00:11:33.067 "driver_specific": { 00:11:33.067 "raid": { 00:11:33.067 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:33.067 "strip_size_kb": 0, 00:11:33.067 "state": "online", 00:11:33.067 "raid_level": "raid1", 00:11:33.067 "superblock": true, 00:11:33.067 "num_base_bdevs": 3, 00:11:33.067 "num_base_bdevs_discovered": 3, 00:11:33.067 "num_base_bdevs_operational": 3, 00:11:33.067 "base_bdevs_list": [ 00:11:33.067 { 00:11:33.067 "name": "pt1", 00:11:33.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.067 "is_configured": true, 00:11:33.067 "data_offset": 2048, 00:11:33.067 "data_size": 63488 00:11:33.067 }, 00:11:33.067 { 00:11:33.067 "name": "pt2", 00:11:33.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.067 "is_configured": true, 00:11:33.067 "data_offset": 2048, 00:11:33.067 "data_size": 63488 00:11:33.067 }, 00:11:33.067 { 00:11:33.067 "name": "pt3", 00:11:33.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.067 "is_configured": true, 00:11:33.067 "data_offset": 2048, 00:11:33.067 "data_size": 63488 00:11:33.067 } 
00:11:33.067 ] 00:11:33.067 } 00:11:33.067 } 00:11:33.067 }' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:33.067 pt2 00:11:33.067 pt3' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.067 09:45:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.067 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 [2024-10-11 09:45:17.709138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c07d535-eaab-4903-8651-b71edd893743 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9c07d535-eaab-4903-8651-b71edd893743 ']' 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 [2024-10-11 09:45:17.740718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.326 [2024-10-11 09:45:17.740813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.326 [2024-10-11 09:45:17.740947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.326 [2024-10-11 09:45:17.741069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.326 [2024-10-11 09:45:17.741085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:33.326 
09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.326 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.326 [2024-10-11 09:45:17.868544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:33.327 [2024-10-11 09:45:17.870727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:33.327 [2024-10-11 09:45:17.870811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:11:33.327 [2024-10-11 09:45:17.870873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:33.327 [2024-10-11 09:45:17.870933] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:33.327 [2024-10-11 09:45:17.870956] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:33.327 [2024-10-11 09:45:17.870976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.327 [2024-10-11 09:45:17.870987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:33.327 request: 00:11:33.327 { 00:11:33.327 "name": "raid_bdev1", 00:11:33.327 "raid_level": "raid1", 00:11:33.327 "base_bdevs": [ 00:11:33.327 "malloc1", 00:11:33.327 "malloc2", 00:11:33.327 "malloc3" 00:11:33.327 ], 00:11:33.327 "superblock": false, 00:11:33.327 "method": "bdev_raid_create", 00:11:33.327 "req_id": 1 00:11:33.327 } 00:11:33.327 Got JSON-RPC error response 00:11:33.327 response: 00:11:33.327 { 00:11:33.327 "code": -17, 00:11:33.327 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:33.327 } 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:33.327 09:45:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.327 [2024-10-11 09:45:17.916396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.327 [2024-10-11 09:45:17.916471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.327 [2024-10-11 09:45:17.916504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.327 [2024-10-11 09:45:17.916515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.327 [2024-10-11 09:45:17.919116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.327 [2024-10-11 09:45:17.919157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.327 [2024-10-11 09:45:17.919256] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:33.327 [2024-10-11 09:45:17.919317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.327 pt1 00:11:33.327 09:45:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.327 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.327 "name": "raid_bdev1", 00:11:33.327 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:33.327 "strip_size_kb": 0, 00:11:33.327 "state": 
"configuring", 00:11:33.327 "raid_level": "raid1", 00:11:33.327 "superblock": true, 00:11:33.327 "num_base_bdevs": 3, 00:11:33.327 "num_base_bdevs_discovered": 1, 00:11:33.327 "num_base_bdevs_operational": 3, 00:11:33.327 "base_bdevs_list": [ 00:11:33.327 { 00:11:33.327 "name": "pt1", 00:11:33.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.327 "is_configured": true, 00:11:33.327 "data_offset": 2048, 00:11:33.327 "data_size": 63488 00:11:33.327 }, 00:11:33.327 { 00:11:33.327 "name": null, 00:11:33.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.327 "is_configured": false, 00:11:33.327 "data_offset": 2048, 00:11:33.327 "data_size": 63488 00:11:33.327 }, 00:11:33.327 { 00:11:33.327 "name": null, 00:11:33.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.327 "is_configured": false, 00:11:33.327 "data_offset": 2048, 00:11:33.327 "data_size": 63488 00:11:33.327 } 00:11:33.327 ] 00:11:33.327 }' 00:11:33.585 09:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.585 09:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 [2024-10-11 09:45:18.335877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.842 [2024-10-11 09:45:18.335999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.842 [2024-10-11 09:45:18.336055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:33.842 
[2024-10-11 09:45:18.336092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.842 [2024-10-11 09:45:18.336649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.842 [2024-10-11 09:45:18.336718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.842 [2024-10-11 09:45:18.336879] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:33.842 [2024-10-11 09:45:18.336938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.842 pt2 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 [2024-10-11 09:45:18.343880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.842 09:45:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.843 "name": "raid_bdev1", 00:11:33.843 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:33.843 "strip_size_kb": 0, 00:11:33.843 "state": "configuring", 00:11:33.843 "raid_level": "raid1", 00:11:33.843 "superblock": true, 00:11:33.843 "num_base_bdevs": 3, 00:11:33.843 "num_base_bdevs_discovered": 1, 00:11:33.843 "num_base_bdevs_operational": 3, 00:11:33.843 "base_bdevs_list": [ 00:11:33.843 { 00:11:33.843 "name": "pt1", 00:11:33.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.843 "is_configured": true, 00:11:33.843 "data_offset": 2048, 00:11:33.843 "data_size": 63488 00:11:33.843 }, 00:11:33.843 { 00:11:33.843 "name": null, 00:11:33.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.843 "is_configured": false, 00:11:33.843 "data_offset": 0, 00:11:33.843 "data_size": 63488 00:11:33.843 }, 00:11:33.843 { 00:11:33.843 "name": null, 00:11:33.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.843 "is_configured": false, 00:11:33.843 
"data_offset": 2048, 00:11:33.843 "data_size": 63488 00:11:33.843 } 00:11:33.843 ] 00:11:33.843 }' 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.843 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.408 [2024-10-11 09:45:18.779207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.408 [2024-10-11 09:45:18.779288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.408 [2024-10-11 09:45:18.779310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:34.408 [2024-10-11 09:45:18.779324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.408 [2024-10-11 09:45:18.779882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.408 [2024-10-11 09:45:18.779921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.408 [2024-10-11 09:45:18.780027] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:34.408 [2024-10-11 09:45:18.780075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.408 pt2 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.408 09:45:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.408 [2024-10-11 09:45:18.787195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:34.408 [2024-10-11 09:45:18.787260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.408 [2024-10-11 09:45:18.787286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.408 [2024-10-11 09:45:18.787302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.408 [2024-10-11 09:45:18.787838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.408 [2024-10-11 09:45:18.787877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:34.408 [2024-10-11 09:45:18.787971] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:34.408 [2024-10-11 09:45:18.788001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:34.408 [2024-10-11 09:45:18.788152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.408 [2024-10-11 09:45:18.788173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.408 [2024-10-11 09:45:18.788456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:34.408 [2024-10-11 09:45:18.788653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:34.408 [2024-10-11 09:45:18.788664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:34.408 [2024-10-11 09:45:18.788845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.408 pt3 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.408 09:45:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.408 "name": "raid_bdev1", 00:11:34.408 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:34.408 "strip_size_kb": 0, 00:11:34.408 "state": "online", 00:11:34.408 "raid_level": "raid1", 00:11:34.408 "superblock": true, 00:11:34.408 "num_base_bdevs": 3, 00:11:34.408 "num_base_bdevs_discovered": 3, 00:11:34.408 "num_base_bdevs_operational": 3, 00:11:34.408 "base_bdevs_list": [ 00:11:34.408 { 00:11:34.408 "name": "pt1", 00:11:34.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.408 "is_configured": true, 00:11:34.408 "data_offset": 2048, 00:11:34.408 "data_size": 63488 00:11:34.408 }, 00:11:34.408 { 00:11:34.408 "name": "pt2", 00:11:34.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.408 "is_configured": true, 00:11:34.408 "data_offset": 2048, 00:11:34.408 "data_size": 63488 00:11:34.408 }, 00:11:34.408 { 00:11:34.408 "name": "pt3", 00:11:34.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.408 "is_configured": true, 00:11:34.408 "data_offset": 2048, 00:11:34.408 "data_size": 63488 00:11:34.408 } 00:11:34.408 ] 00:11:34.408 }' 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.408 09:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.668 [2024-10-11 09:45:19.210920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.668 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.668 "name": "raid_bdev1", 00:11:34.668 "aliases": [ 00:11:34.668 "9c07d535-eaab-4903-8651-b71edd893743" 00:11:34.668 ], 00:11:34.668 "product_name": "Raid Volume", 00:11:34.668 "block_size": 512, 00:11:34.668 "num_blocks": 63488, 00:11:34.668 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:34.668 "assigned_rate_limits": { 00:11:34.668 "rw_ios_per_sec": 0, 00:11:34.668 "rw_mbytes_per_sec": 0, 00:11:34.668 "r_mbytes_per_sec": 0, 00:11:34.668 "w_mbytes_per_sec": 0 00:11:34.668 }, 00:11:34.668 "claimed": false, 00:11:34.668 "zoned": false, 00:11:34.668 "supported_io_types": { 00:11:34.668 "read": true, 00:11:34.668 "write": true, 00:11:34.668 "unmap": false, 00:11:34.668 "flush": false, 00:11:34.668 "reset": true, 00:11:34.668 "nvme_admin": false, 00:11:34.668 "nvme_io": false, 00:11:34.668 "nvme_io_md": false, 00:11:34.668 "write_zeroes": true, 00:11:34.668 "zcopy": false, 00:11:34.668 "get_zone_info": 
false, 00:11:34.668 "zone_management": false, 00:11:34.668 "zone_append": false, 00:11:34.668 "compare": false, 00:11:34.668 "compare_and_write": false, 00:11:34.668 "abort": false, 00:11:34.668 "seek_hole": false, 00:11:34.668 "seek_data": false, 00:11:34.668 "copy": false, 00:11:34.668 "nvme_iov_md": false 00:11:34.668 }, 00:11:34.668 "memory_domains": [ 00:11:34.668 { 00:11:34.668 "dma_device_id": "system", 00:11:34.668 "dma_device_type": 1 00:11:34.668 }, 00:11:34.668 { 00:11:34.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.668 "dma_device_type": 2 00:11:34.668 }, 00:11:34.668 { 00:11:34.668 "dma_device_id": "system", 00:11:34.668 "dma_device_type": 1 00:11:34.668 }, 00:11:34.669 { 00:11:34.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.669 "dma_device_type": 2 00:11:34.669 }, 00:11:34.669 { 00:11:34.669 "dma_device_id": "system", 00:11:34.669 "dma_device_type": 1 00:11:34.669 }, 00:11:34.669 { 00:11:34.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.669 "dma_device_type": 2 00:11:34.669 } 00:11:34.669 ], 00:11:34.669 "driver_specific": { 00:11:34.669 "raid": { 00:11:34.669 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:34.669 "strip_size_kb": 0, 00:11:34.669 "state": "online", 00:11:34.669 "raid_level": "raid1", 00:11:34.669 "superblock": true, 00:11:34.669 "num_base_bdevs": 3, 00:11:34.669 "num_base_bdevs_discovered": 3, 00:11:34.669 "num_base_bdevs_operational": 3, 00:11:34.669 "base_bdevs_list": [ 00:11:34.669 { 00:11:34.669 "name": "pt1", 00:11:34.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.669 "is_configured": true, 00:11:34.669 "data_offset": 2048, 00:11:34.669 "data_size": 63488 00:11:34.669 }, 00:11:34.669 { 00:11:34.669 "name": "pt2", 00:11:34.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.669 "is_configured": true, 00:11:34.669 "data_offset": 2048, 00:11:34.669 "data_size": 63488 00:11:34.669 }, 00:11:34.669 { 00:11:34.669 "name": "pt3", 00:11:34.669 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:34.669 "is_configured": true, 00:11:34.669 "data_offset": 2048, 00:11:34.669 "data_size": 63488 00:11:34.669 } 00:11:34.669 ] 00:11:34.669 } 00:11:34.669 } 00:11:34.669 }' 00:11:34.669 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.669 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.669 pt2 00:11:34.669 pt3' 00:11:34.669 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.927 09:45:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:34.927 [2024-10-11 09:45:19.466463] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9c07d535-eaab-4903-8651-b71edd893743 '!=' 9c07d535-eaab-4903-8651-b71edd893743 ']' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.927 [2024-10-11 09:45:19.502123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.927 09:45:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.927 "name": "raid_bdev1", 00:11:34.927 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:34.927 "strip_size_kb": 0, 00:11:34.927 "state": "online", 00:11:34.927 "raid_level": "raid1", 00:11:34.927 "superblock": true, 00:11:34.927 "num_base_bdevs": 3, 00:11:34.927 "num_base_bdevs_discovered": 2, 00:11:34.927 "num_base_bdevs_operational": 2, 00:11:34.927 "base_bdevs_list": [ 00:11:34.927 { 00:11:34.927 "name": null, 00:11:34.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.927 "is_configured": false, 00:11:34.927 "data_offset": 0, 00:11:34.927 "data_size": 63488 00:11:34.927 }, 00:11:34.927 { 00:11:34.927 "name": "pt2", 00:11:34.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.927 "is_configured": true, 00:11:34.927 "data_offset": 2048, 00:11:34.927 "data_size": 63488 00:11:34.927 }, 00:11:34.927 { 00:11:34.927 "name": "pt3", 00:11:34.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.927 "is_configured": true, 00:11:34.927 "data_offset": 2048, 00:11:34.927 "data_size": 63488 00:11:34.927 } 
00:11:34.927 ] 00:11:34.927 }' 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.927 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.493 [2024-10-11 09:45:19.973293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.493 [2024-10-11 09:45:19.973401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.493 [2024-10-11 09:45:19.973501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.493 [2024-10-11 09:45:19.973571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.493 [2024-10-11 09:45:19.973589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.493 09:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.494 09:45:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.494 [2024-10-11 09:45:20.041133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.494 [2024-10-11 09:45:20.041196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.494 [2024-10-11 09:45:20.041217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:35.494 [2024-10-11 09:45:20.041230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.494 [2024-10-11 09:45:20.043789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.494 [2024-10-11 09:45:20.043881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.494 [2024-10-11 09:45:20.043984] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.494 [2024-10-11 09:45:20.044040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.494 pt2 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.494 09:45:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.494 "name": "raid_bdev1", 00:11:35.494 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:35.494 "strip_size_kb": 0, 00:11:35.494 "state": "configuring", 00:11:35.494 "raid_level": "raid1", 00:11:35.494 "superblock": true, 00:11:35.494 "num_base_bdevs": 3, 00:11:35.494 "num_base_bdevs_discovered": 1, 00:11:35.494 "num_base_bdevs_operational": 2, 00:11:35.494 "base_bdevs_list": [ 00:11:35.494 { 00:11:35.494 "name": null, 00:11:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.494 "is_configured": false, 00:11:35.494 "data_offset": 2048, 00:11:35.494 "data_size": 63488 00:11:35.494 }, 00:11:35.494 { 00:11:35.494 "name": "pt2", 00:11:35.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.494 "is_configured": true, 00:11:35.494 "data_offset": 2048, 00:11:35.494 "data_size": 63488 00:11:35.494 }, 00:11:35.494 { 00:11:35.494 "name": null, 00:11:35.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.494 "is_configured": false, 00:11:35.494 "data_offset": 2048, 00:11:35.494 "data_size": 63488 00:11:35.494 } 
00:11:35.494 ] 00:11:35.494 }' 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.494 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.060 [2024-10-11 09:45:20.472452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:36.060 [2024-10-11 09:45:20.472589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.060 [2024-10-11 09:45:20.472645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:36.060 [2024-10-11 09:45:20.472685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.060 [2024-10-11 09:45:20.473269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.060 [2024-10-11 09:45:20.473350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:36.060 [2024-10-11 09:45:20.473481] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:36.060 [2024-10-11 09:45:20.473551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.060 [2024-10-11 09:45:20.473714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:36.060 [2024-10-11 09:45:20.473775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.060 [2024-10-11 09:45:20.474095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:36.060 [2024-10-11 09:45:20.474333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:36.060 [2024-10-11 09:45:20.474377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:36.060 [2024-10-11 09:45:20.474591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.060 pt3 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.060 
09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.060 "name": "raid_bdev1", 00:11:36.060 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:36.060 "strip_size_kb": 0, 00:11:36.060 "state": "online", 00:11:36.060 "raid_level": "raid1", 00:11:36.060 "superblock": true, 00:11:36.060 "num_base_bdevs": 3, 00:11:36.060 "num_base_bdevs_discovered": 2, 00:11:36.060 "num_base_bdevs_operational": 2, 00:11:36.060 "base_bdevs_list": [ 00:11:36.060 { 00:11:36.060 "name": null, 00:11:36.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.060 "is_configured": false, 00:11:36.060 "data_offset": 2048, 00:11:36.060 "data_size": 63488 00:11:36.060 }, 00:11:36.060 { 00:11:36.060 "name": "pt2", 00:11:36.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.060 "is_configured": true, 00:11:36.060 "data_offset": 2048, 00:11:36.060 "data_size": 63488 00:11:36.060 }, 00:11:36.060 { 00:11:36.060 "name": "pt3", 00:11:36.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.060 "is_configured": true, 00:11:36.060 "data_offset": 2048, 00:11:36.060 "data_size": 63488 00:11:36.060 } 00:11:36.060 ] 00:11:36.060 }' 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.060 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.319 [2024-10-11 09:45:20.927869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.319 [2024-10-11 09:45:20.927915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.319 [2024-10-11 09:45:20.928022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.319 [2024-10-11 09:45:20.928099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.319 [2024-10-11 09:45:20.928111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.319 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.578 [2024-10-11 09:45:20.979857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.578 [2024-10-11 09:45:20.979939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.578 [2024-10-11 09:45:20.979965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:36.578 [2024-10-11 09:45:20.979977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.578 [2024-10-11 09:45:20.982549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.578 [2024-10-11 09:45:20.982593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.578 [2024-10-11 09:45:20.982694] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.578 [2024-10-11 09:45:20.982773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.578 [2024-10-11 09:45:20.982939] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:36.578 [2024-10-11 09:45:20.982956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.578 [2024-10-11 09:45:20.982975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:36.578 [2024-10-11 09:45:20.983034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.578 pt1 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.578 09:45:20 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.578 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.578 "name": "raid_bdev1", 00:11:36.578 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:36.578 "strip_size_kb": 0, 00:11:36.578 "state": "configuring", 00:11:36.578 "raid_level": "raid1", 00:11:36.579 "superblock": true, 00:11:36.579 "num_base_bdevs": 3, 00:11:36.579 "num_base_bdevs_discovered": 1, 00:11:36.579 "num_base_bdevs_operational": 2, 00:11:36.579 "base_bdevs_list": [ 00:11:36.579 { 00:11:36.579 "name": null, 00:11:36.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.579 "is_configured": false, 00:11:36.579 "data_offset": 2048, 00:11:36.579 "data_size": 63488 00:11:36.579 }, 00:11:36.579 { 00:11:36.579 "name": "pt2", 00:11:36.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.579 "is_configured": true, 00:11:36.579 "data_offset": 2048, 00:11:36.579 "data_size": 63488 00:11:36.579 }, 00:11:36.579 { 00:11:36.579 "name": null, 00:11:36.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.579 "is_configured": false, 00:11:36.579 "data_offset": 2048, 00:11:36.579 "data_size": 63488 00:11:36.579 } 00:11:36.579 ] 00:11:36.579 }' 00:11:36.579 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.579 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.837 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 [2024-10-11 09:45:21.463345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:36.837 [2024-10-11 09:45:21.463465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.837 [2024-10-11 09:45:21.463521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:36.837 [2024-10-11 09:45:21.463557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.837 [2024-10-11 09:45:21.464156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.837 [2024-10-11 09:45:21.464223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:36.837 [2024-10-11 09:45:21.464392] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:36.837 [2024-10-11 09:45:21.464471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.837 [2024-10-11 09:45:21.464628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:36.837 [2024-10-11 09:45:21.464639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.837 [2024-10-11 09:45:21.464992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:36.837 [2024-10-11 09:45:21.465202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:36.837 [2024-10-11 09:45:21.465217] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:36.837 [2024-10-11 09:45:21.465377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.096 pt3 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.096 "name": "raid_bdev1", 00:11:37.096 "uuid": "9c07d535-eaab-4903-8651-b71edd893743", 00:11:37.096 "strip_size_kb": 0, 00:11:37.096 "state": "online", 00:11:37.096 "raid_level": "raid1", 00:11:37.096 "superblock": true, 00:11:37.096 "num_base_bdevs": 3, 00:11:37.096 "num_base_bdevs_discovered": 2, 00:11:37.096 "num_base_bdevs_operational": 2, 00:11:37.096 "base_bdevs_list": [ 00:11:37.096 { 00:11:37.096 "name": null, 00:11:37.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.096 "is_configured": false, 00:11:37.096 "data_offset": 2048, 00:11:37.096 "data_size": 63488 00:11:37.096 }, 00:11:37.096 { 00:11:37.096 "name": "pt2", 00:11:37.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.096 "is_configured": true, 00:11:37.096 "data_offset": 2048, 00:11:37.096 "data_size": 63488 00:11:37.096 }, 00:11:37.096 { 00:11:37.096 "name": "pt3", 00:11:37.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.096 "is_configured": true, 00:11:37.096 "data_offset": 2048, 00:11:37.096 "data_size": 63488 00:11:37.096 } 00:11:37.096 ] 00:11:37.096 }' 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.096 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.354 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:37.354 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.354 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.354 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:37.354 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.612 09:45:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:37.612 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:37.612 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.612 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.612 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.612 [2024-10-11 09:45:22.002891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9c07d535-eaab-4903-8651-b71edd893743 '!=' 9c07d535-eaab-4903-8651-b71edd893743 ']' 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69101 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 69101 ']' 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 69101 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69101 00:11:37.612 killing process with pid 69101 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69101' 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 69101 00:11:37.612 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 69101 00:11:37.612 [2024-10-11 09:45:22.072723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.612 [2024-10-11 09:45:22.072857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.612 [2024-10-11 09:45:22.073004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.612 [2024-10-11 09:45:22.073026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:37.871 [2024-10-11 09:45:22.425758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.248 ************************************ 00:11:39.248 END TEST raid_superblock_test 00:11:39.248 ************************************ 00:11:39.248 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:39.248 00:11:39.248 real 0m7.982s 00:11:39.248 user 0m12.414s 00:11:39.248 sys 0m1.230s 00:11:39.248 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.248 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 09:45:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:39.248 09:45:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:39.248 09:45:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.248 09:45:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 ************************************ 00:11:39.248 START TEST raid_read_error_test 00:11:39.248 ************************************ 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:11:39.248 09:45:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:39.248 09:45:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jboP6m9Yxm 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69552 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69552 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69552 ']' 00:11:39.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:39.248 09:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.248 [2024-10-11 09:45:23.833808] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:11:39.248 [2024-10-11 09:45:23.833944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69552 ] 00:11:39.506 [2024-10-11 09:45:24.000477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.765 [2024-10-11 09:45:24.168934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.024 [2024-10-11 09:45:24.429424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.024 [2024-10-11 09:45:24.429570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.283 BaseBdev1_malloc 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.283 true 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.283 [2024-10-11 09:45:24.808178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:40.283 [2024-10-11 09:45:24.808298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.283 [2024-10-11 09:45:24.808329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:40.283 [2024-10-11 09:45:24.808342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.283 [2024-10-11 09:45:24.810794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.283 [2024-10-11 09:45:24.810837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:40.283 BaseBdev1 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.283 BaseBdev2_malloc 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.283 true 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.283 [2024-10-11 09:45:24.877606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:40.283 [2024-10-11 09:45:24.877671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.283 [2024-10-11 09:45:24.877692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:40.283 [2024-10-11 09:45:24.877704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.283 [2024-10-11 09:45:24.880198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.283 [2024-10-11 09:45:24.880247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:40.283 BaseBdev2 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.283 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.542 BaseBdev3_malloc 00:11:40.543 09:45:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.543 true 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.543 [2024-10-11 09:45:24.962180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:40.543 [2024-10-11 09:45:24.962244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.543 [2024-10-11 09:45:24.962267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:40.543 [2024-10-11 09:45:24.962280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.543 [2024-10-11 09:45:24.964782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.543 [2024-10-11 09:45:24.964825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:40.543 BaseBdev3 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.543 [2024-10-11 09:45:24.974224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.543 [2024-10-11 09:45:24.976348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.543 [2024-10-11 09:45:24.976509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.543 [2024-10-11 09:45:24.976786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.543 [2024-10-11 09:45:24.976803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.543 [2024-10-11 09:45:24.977113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:40.543 [2024-10-11 09:45:24.977322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.543 [2024-10-11 09:45:24.977337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:40.543 [2024-10-11 09:45:24.977512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.543 09:45:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.543 09:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.543 09:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.543 "name": "raid_bdev1", 00:11:40.543 "uuid": "86ef5f0a-d777-4a8e-ac62-97c22a886e69", 00:11:40.543 "strip_size_kb": 0, 00:11:40.543 "state": "online", 00:11:40.543 "raid_level": "raid1", 00:11:40.543 "superblock": true, 00:11:40.543 "num_base_bdevs": 3, 00:11:40.543 "num_base_bdevs_discovered": 3, 00:11:40.543 "num_base_bdevs_operational": 3, 00:11:40.543 "base_bdevs_list": [ 00:11:40.543 { 00:11:40.543 "name": "BaseBdev1", 00:11:40.543 "uuid": "47795bfa-6be0-5c8b-937a-1601a05c3d81", 00:11:40.543 "is_configured": true, 00:11:40.543 "data_offset": 2048, 00:11:40.543 "data_size": 63488 00:11:40.543 }, 00:11:40.543 { 00:11:40.543 "name": "BaseBdev2", 00:11:40.543 "uuid": "2da3dec3-d87a-5597-8a94-eacb6b06f747", 00:11:40.543 "is_configured": true, 00:11:40.543 "data_offset": 2048, 00:11:40.543 "data_size": 63488 
00:11:40.543 }, 00:11:40.543 { 00:11:40.543 "name": "BaseBdev3", 00:11:40.543 "uuid": "4d51eabe-2a9f-5c74-9199-5893a10d9b76", 00:11:40.543 "is_configured": true, 00:11:40.543 "data_offset": 2048, 00:11:40.543 "data_size": 63488 00:11:40.543 } 00:11:40.543 ] 00:11:40.543 }' 00:11:40.543 09:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.543 09:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.109 09:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.109 09:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.109 [2024-10-11 09:45:25.538932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.065 
09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.065 "name": "raid_bdev1", 00:11:42.065 "uuid": "86ef5f0a-d777-4a8e-ac62-97c22a886e69", 00:11:42.065 "strip_size_kb": 0, 00:11:42.065 "state": "online", 00:11:42.065 "raid_level": "raid1", 00:11:42.065 "superblock": true, 00:11:42.065 "num_base_bdevs": 3, 00:11:42.065 "num_base_bdevs_discovered": 3, 00:11:42.065 "num_base_bdevs_operational": 3, 00:11:42.065 "base_bdevs_list": [ 00:11:42.065 { 00:11:42.065 "name": "BaseBdev1", 00:11:42.065 "uuid": "47795bfa-6be0-5c8b-937a-1601a05c3d81", 
00:11:42.065 "is_configured": true, 00:11:42.065 "data_offset": 2048, 00:11:42.065 "data_size": 63488 00:11:42.065 }, 00:11:42.065 { 00:11:42.065 "name": "BaseBdev2", 00:11:42.065 "uuid": "2da3dec3-d87a-5597-8a94-eacb6b06f747", 00:11:42.065 "is_configured": true, 00:11:42.065 "data_offset": 2048, 00:11:42.065 "data_size": 63488 00:11:42.065 }, 00:11:42.065 { 00:11:42.065 "name": "BaseBdev3", 00:11:42.065 "uuid": "4d51eabe-2a9f-5c74-9199-5893a10d9b76", 00:11:42.065 "is_configured": true, 00:11:42.065 "data_offset": 2048, 00:11:42.065 "data_size": 63488 00:11:42.065 } 00:11:42.065 ] 00:11:42.065 }' 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.065 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.324 [2024-10-11 09:45:26.893955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.324 [2024-10-11 09:45:26.893992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.324 [2024-10-11 09:45:26.897200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.324 [2024-10-11 09:45:26.897311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.324 [2024-10-11 09:45:26.897439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.324 [2024-10-11 09:45:26.897451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:42.324 { 00:11:42.324 "results": [ 00:11:42.324 { 00:11:42.324 "job": "raid_bdev1", 
00:11:42.324 "core_mask": "0x1", 00:11:42.324 "workload": "randrw", 00:11:42.324 "percentage": 50, 00:11:42.324 "status": "finished", 00:11:42.324 "queue_depth": 1, 00:11:42.324 "io_size": 131072, 00:11:42.324 "runtime": 1.355485, 00:11:42.324 "iops": 11223.289081030038, 00:11:42.324 "mibps": 1402.9111351287547, 00:11:42.324 "io_failed": 0, 00:11:42.324 "io_timeout": 0, 00:11:42.324 "avg_latency_us": 85.81324648506492, 00:11:42.324 "min_latency_us": 29.289082969432314, 00:11:42.324 "max_latency_us": 1845.8829694323144 00:11:42.324 } 00:11:42.324 ], 00:11:42.324 "core_count": 1 00:11:42.324 } 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69552 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69552 ']' 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69552 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69552 00:11:42.324 killing process with pid 69552 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69552' 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69552 00:11:42.324 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69552 00:11:42.324 [2024-10-11 09:45:26.926138] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.582 [2024-10-11 09:45:27.189679] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jboP6m9Yxm 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:43.960 ************************************ 00:11:43.960 END TEST raid_read_error_test 00:11:43.960 ************************************ 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:43.960 00:11:43.960 real 0m4.816s 00:11:43.960 user 0m5.760s 00:11:43.960 sys 0m0.527s 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.960 09:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.219 09:45:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:44.219 09:45:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:44.219 09:45:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.219 09:45:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.219 ************************************ 00:11:44.219 START TEST raid_write_error_test 00:11:44.219 ************************************ 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:44.219 09:45:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hFeDJGjU3C 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69700 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69700 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69700 ']' 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.219 09:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.219 [2024-10-11 09:45:28.709430] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:44.219 [2024-10-11 09:45:28.709558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69700 ] 00:11:44.477 [2024-10-11 09:45:28.878391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.477 [2024-10-11 09:45:29.020255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.736 [2024-10-11 09:45:29.291410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.736 [2024-10-11 09:45:29.291447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.304 BaseBdev1_malloc 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.304 true 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.304 [2024-10-11 09:45:29.733626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.304 [2024-10-11 09:45:29.733801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.304 [2024-10-11 09:45:29.733835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.304 [2024-10-11 09:45:29.733849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.304 [2024-10-11 09:45:29.736364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.304 [2024-10-11 09:45:29.736410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.304 BaseBdev1 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.304 09:45:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.304 BaseBdev2_malloc 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.305 true 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.305 [2024-10-11 09:45:29.798438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.305 [2024-10-11 09:45:29.798504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.305 [2024-10-11 09:45:29.798523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.305 [2024-10-11 09:45:29.798537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.305 [2024-10-11 09:45:29.800995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.305 [2024-10-11 09:45:29.801105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.305 BaseBdev2 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.305 09:45:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.305 BaseBdev3_malloc 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.305 true 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.305 [2024-10-11 09:45:29.896294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.305 [2024-10-11 09:45:29.896358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.305 [2024-10-11 09:45:29.896380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.305 [2024-10-11 09:45:29.896392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.305 [2024-10-11 09:45:29.898866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.305 [2024-10-11 09:45:29.898908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:45.305 BaseBdev3 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.305 [2024-10-11 09:45:29.904349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.305 [2024-10-11 09:45:29.906451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.305 [2024-10-11 09:45:29.906538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.305 [2024-10-11 09:45:29.906799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:45.305 [2024-10-11 09:45:29.906820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.305 [2024-10-11 09:45:29.907131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:45.305 [2024-10-11 09:45:29.907350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:45.305 [2024-10-11 09:45:29.907366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:45.305 [2024-10-11 09:45:29.907552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.305 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.563 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.563 "name": "raid_bdev1", 00:11:45.563 "uuid": "a0464a88-3c63-4d84-a17d-0da4b0ea9fd6", 00:11:45.563 "strip_size_kb": 0, 00:11:45.563 "state": "online", 00:11:45.563 "raid_level": "raid1", 00:11:45.563 "superblock": true, 00:11:45.563 "num_base_bdevs": 3, 00:11:45.563 "num_base_bdevs_discovered": 3, 00:11:45.563 "num_base_bdevs_operational": 3, 00:11:45.563 "base_bdevs_list": [ 00:11:45.563 { 00:11:45.563 "name": "BaseBdev1", 00:11:45.563 
"uuid": "4fbd6c8f-566d-55fd-9efe-c533c58dc2a6", 00:11:45.563 "is_configured": true, 00:11:45.563 "data_offset": 2048, 00:11:45.563 "data_size": 63488 00:11:45.563 }, 00:11:45.563 { 00:11:45.563 "name": "BaseBdev2", 00:11:45.563 "uuid": "f10bc570-28d2-57e0-98f6-c7314f31e466", 00:11:45.563 "is_configured": true, 00:11:45.563 "data_offset": 2048, 00:11:45.563 "data_size": 63488 00:11:45.563 }, 00:11:45.563 { 00:11:45.563 "name": "BaseBdev3", 00:11:45.563 "uuid": "a7555927-0f71-59f4-9eeb-f5d485b125ad", 00:11:45.563 "is_configured": true, 00:11:45.563 "data_offset": 2048, 00:11:45.563 "data_size": 63488 00:11:45.563 } 00:11:45.563 ] 00:11:45.563 }' 00:11:45.563 09:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.563 09:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.822 09:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:45.822 09:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.822 [2024-10-11 09:45:30.449364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.759 [2024-10-11 09:45:31.354529] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:46.759 [2024-10-11 09:45:31.354695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.759 [2024-10-11 09:45:31.354992] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.759 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.017 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.017 "name": "raid_bdev1", 00:11:47.017 "uuid": "a0464a88-3c63-4d84-a17d-0da4b0ea9fd6", 00:11:47.017 "strip_size_kb": 0, 00:11:47.017 "state": "online", 00:11:47.017 "raid_level": "raid1", 00:11:47.017 "superblock": true, 00:11:47.017 "num_base_bdevs": 3, 00:11:47.017 "num_base_bdevs_discovered": 2, 00:11:47.017 "num_base_bdevs_operational": 2, 00:11:47.017 "base_bdevs_list": [ 00:11:47.017 { 00:11:47.017 "name": null, 00:11:47.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.017 "is_configured": false, 00:11:47.017 "data_offset": 0, 00:11:47.017 "data_size": 63488 00:11:47.017 }, 00:11:47.017 { 00:11:47.017 "name": "BaseBdev2", 00:11:47.017 "uuid": "f10bc570-28d2-57e0-98f6-c7314f31e466", 00:11:47.017 "is_configured": true, 00:11:47.017 "data_offset": 2048, 00:11:47.017 "data_size": 63488 00:11:47.017 }, 00:11:47.017 { 00:11:47.017 "name": "BaseBdev3", 00:11:47.017 "uuid": "a7555927-0f71-59f4-9eeb-f5d485b125ad", 00:11:47.017 "is_configured": true, 00:11:47.017 "data_offset": 2048, 00:11:47.017 "data_size": 63488 00:11:47.017 } 00:11:47.017 ] 00:11:47.017 }' 00:11:47.017 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.017 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.276 [2024-10-11 09:45:31.794615] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.276 [2024-10-11 09:45:31.794655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.276 [2024-10-11 09:45:31.797868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.276 [2024-10-11 09:45:31.797932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.276 [2024-10-11 09:45:31.798027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.276 [2024-10-11 09:45:31.798042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:47.276 { 00:11:47.276 "results": [ 00:11:47.276 { 00:11:47.276 "job": "raid_bdev1", 00:11:47.276 "core_mask": "0x1", 00:11:47.276 "workload": "randrw", 00:11:47.276 "percentage": 50, 00:11:47.276 "status": "finished", 00:11:47.276 "queue_depth": 1, 00:11:47.276 "io_size": 131072, 00:11:47.276 "runtime": 1.34552, 00:11:47.276 "iops": 12419.73363457994, 00:11:47.276 "mibps": 1552.4667043224924, 00:11:47.276 "io_failed": 0, 00:11:47.276 "io_timeout": 0, 00:11:47.276 "avg_latency_us": 77.24887077230461, 00:11:47.276 "min_latency_us": 28.05938864628821, 00:11:47.276 "max_latency_us": 1781.4917030567685 00:11:47.276 } 00:11:47.276 ], 00:11:47.276 "core_count": 1 00:11:47.276 } 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69700 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69700 ']' 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69700 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:47.276 09:45:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69700 00:11:47.276 killing process with pid 69700 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69700' 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69700 00:11:47.276 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69700 00:11:47.276 [2024-10-11 09:45:31.838692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.535 [2024-10-11 09:45:32.096800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hFeDJGjU3C 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:48.909 ************************************ 00:11:48.909 END TEST raid_write_error_test 00:11:48.909 ************************************ 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:11:48.909 00:11:48.909 real 0m4.866s 00:11:48.909 user 0m5.811s 00:11:48.909 sys 0m0.525s 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.909 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.909 09:45:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:48.909 09:45:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:48.909 09:45:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:48.909 09:45:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:48.909 09:45:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.909 09:45:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.909 ************************************ 00:11:48.909 START TEST raid_state_function_test 00:11:48.909 ************************************ 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.909 
09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.909 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:49.168 09:45:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69845 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69845' 00:11:49.168 Process raid pid: 69845 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69845 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69845 ']' 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.168 09:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.168 [2024-10-11 09:45:33.636698] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:11:49.168 [2024-10-11 09:45:33.636840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.427 [2024-10-11 09:45:33.827179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.427 [2024-10-11 09:45:33.986500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.685 [2024-10-11 09:45:34.250294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.685 [2024-10-11 09:45:34.250349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.943 [2024-10-11 09:45:34.561473] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.943 [2024-10-11 09:45:34.561538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.943 [2024-10-11 09:45:34.561550] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.943 [2024-10-11 09:45:34.561563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.943 [2024-10-11 09:45:34.561570] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:49.943 [2024-10-11 09:45:34.561580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.943 [2024-10-11 09:45:34.561588] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:49.943 [2024-10-11 09:45:34.561598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:49.943 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.201 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.201 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.201 "name": "Existed_Raid", 00:11:50.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.201 "strip_size_kb": 64, 00:11:50.201 "state": "configuring", 00:11:50.201 "raid_level": "raid0", 00:11:50.201 "superblock": false, 00:11:50.201 "num_base_bdevs": 4, 00:11:50.201 "num_base_bdevs_discovered": 0, 00:11:50.201 "num_base_bdevs_operational": 4, 00:11:50.201 "base_bdevs_list": [ 00:11:50.201 { 00:11:50.201 "name": "BaseBdev1", 00:11:50.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.201 "is_configured": false, 00:11:50.201 "data_offset": 0, 00:11:50.201 "data_size": 0 00:11:50.201 }, 00:11:50.201 { 00:11:50.201 "name": "BaseBdev2", 00:11:50.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.201 "is_configured": false, 00:11:50.201 "data_offset": 0, 00:11:50.201 "data_size": 0 00:11:50.201 }, 00:11:50.201 { 00:11:50.201 "name": "BaseBdev3", 00:11:50.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.201 "is_configured": false, 00:11:50.201 "data_offset": 0, 00:11:50.201 "data_size": 0 00:11:50.201 }, 00:11:50.201 { 00:11:50.201 "name": "BaseBdev4", 00:11:50.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.201 "is_configured": false, 00:11:50.201 "data_offset": 0, 00:11:50.201 "data_size": 0 00:11:50.201 } 00:11:50.201 ] 00:11:50.201 }' 00:11:50.201 09:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.201 09:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.460 [2024-10-11 09:45:35.028697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.460 [2024-10-11 09:45:35.028756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.460 [2024-10-11 09:45:35.036712] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.460 [2024-10-11 09:45:35.036770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.460 [2024-10-11 09:45:35.036781] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.460 [2024-10-11 09:45:35.036793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.460 [2024-10-11 09:45:35.036800] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.460 [2024-10-11 09:45:35.036811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.460 [2024-10-11 09:45:35.036819] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:50.460 [2024-10-11 09:45:35.036829] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.460 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.718 [2024-10-11 09:45:35.092220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.718 BaseBdev1 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.718 [ 00:11:50.718 { 00:11:50.718 "name": "BaseBdev1", 00:11:50.718 "aliases": [ 00:11:50.718 "10e1d65e-a83c-43e6-a951-367d8300f8e2" 00:11:50.718 ], 00:11:50.718 "product_name": "Malloc disk", 00:11:50.718 "block_size": 512, 00:11:50.718 "num_blocks": 65536, 00:11:50.718 "uuid": "10e1d65e-a83c-43e6-a951-367d8300f8e2", 00:11:50.718 "assigned_rate_limits": { 00:11:50.718 "rw_ios_per_sec": 0, 00:11:50.718 "rw_mbytes_per_sec": 0, 00:11:50.718 "r_mbytes_per_sec": 0, 00:11:50.718 "w_mbytes_per_sec": 0 00:11:50.718 }, 00:11:50.718 "claimed": true, 00:11:50.718 "claim_type": "exclusive_write", 00:11:50.718 "zoned": false, 00:11:50.718 "supported_io_types": { 00:11:50.718 "read": true, 00:11:50.718 "write": true, 00:11:50.718 "unmap": true, 00:11:50.718 "flush": true, 00:11:50.718 "reset": true, 00:11:50.718 "nvme_admin": false, 00:11:50.718 "nvme_io": false, 00:11:50.718 "nvme_io_md": false, 00:11:50.718 "write_zeroes": true, 00:11:50.718 "zcopy": true, 00:11:50.718 "get_zone_info": false, 00:11:50.718 "zone_management": false, 00:11:50.718 "zone_append": false, 00:11:50.718 "compare": false, 00:11:50.718 "compare_and_write": false, 00:11:50.718 "abort": true, 00:11:50.718 "seek_hole": false, 00:11:50.718 "seek_data": false, 00:11:50.718 "copy": true, 00:11:50.718 "nvme_iov_md": false 00:11:50.718 }, 00:11:50.718 "memory_domains": [ 00:11:50.718 { 00:11:50.718 "dma_device_id": "system", 00:11:50.718 "dma_device_type": 1 00:11:50.718 }, 00:11:50.718 { 00:11:50.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.718 "dma_device_type": 2 00:11:50.718 } 00:11:50.718 ], 00:11:50.718 "driver_specific": {} 00:11:50.718 } 00:11:50.718 ] 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.718 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.719 "name": "Existed_Raid", 
00:11:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.719 "strip_size_kb": 64, 00:11:50.719 "state": "configuring", 00:11:50.719 "raid_level": "raid0", 00:11:50.719 "superblock": false, 00:11:50.719 "num_base_bdevs": 4, 00:11:50.719 "num_base_bdevs_discovered": 1, 00:11:50.719 "num_base_bdevs_operational": 4, 00:11:50.719 "base_bdevs_list": [ 00:11:50.719 { 00:11:50.719 "name": "BaseBdev1", 00:11:50.719 "uuid": "10e1d65e-a83c-43e6-a951-367d8300f8e2", 00:11:50.719 "is_configured": true, 00:11:50.719 "data_offset": 0, 00:11:50.719 "data_size": 65536 00:11:50.719 }, 00:11:50.719 { 00:11:50.719 "name": "BaseBdev2", 00:11:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.719 "is_configured": false, 00:11:50.719 "data_offset": 0, 00:11:50.719 "data_size": 0 00:11:50.719 }, 00:11:50.719 { 00:11:50.719 "name": "BaseBdev3", 00:11:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.719 "is_configured": false, 00:11:50.719 "data_offset": 0, 00:11:50.719 "data_size": 0 00:11:50.719 }, 00:11:50.719 { 00:11:50.719 "name": "BaseBdev4", 00:11:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.719 "is_configured": false, 00:11:50.719 "data_offset": 0, 00:11:50.719 "data_size": 0 00:11:50.719 } 00:11:50.719 ] 00:11:50.719 }' 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.719 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.978 [2024-10-11 09:45:35.575572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.978 [2024-10-11 09:45:35.575642] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.978 [2024-10-11 09:45:35.583642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.978 [2024-10-11 09:45:35.585846] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.978 [2024-10-11 09:45:35.585947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.978 [2024-10-11 09:45:35.585963] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.978 [2024-10-11 09:45:35.585978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.978 [2024-10-11 09:45:35.585986] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:50.978 [2024-10-11 09:45:35.585997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.978 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.237 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.237 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.237 "name": "Existed_Raid", 00:11:51.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.237 "strip_size_kb": 64, 00:11:51.237 "state": "configuring", 00:11:51.237 "raid_level": "raid0", 00:11:51.237 "superblock": false, 00:11:51.237 "num_base_bdevs": 4, 00:11:51.237 
"num_base_bdevs_discovered": 1, 00:11:51.237 "num_base_bdevs_operational": 4, 00:11:51.237 "base_bdevs_list": [ 00:11:51.237 { 00:11:51.237 "name": "BaseBdev1", 00:11:51.237 "uuid": "10e1d65e-a83c-43e6-a951-367d8300f8e2", 00:11:51.237 "is_configured": true, 00:11:51.237 "data_offset": 0, 00:11:51.237 "data_size": 65536 00:11:51.237 }, 00:11:51.237 { 00:11:51.237 "name": "BaseBdev2", 00:11:51.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.237 "is_configured": false, 00:11:51.237 "data_offset": 0, 00:11:51.237 "data_size": 0 00:11:51.237 }, 00:11:51.237 { 00:11:51.237 "name": "BaseBdev3", 00:11:51.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.237 "is_configured": false, 00:11:51.237 "data_offset": 0, 00:11:51.237 "data_size": 0 00:11:51.237 }, 00:11:51.237 { 00:11:51.237 "name": "BaseBdev4", 00:11:51.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.237 "is_configured": false, 00:11:51.237 "data_offset": 0, 00:11:51.237 "data_size": 0 00:11:51.237 } 00:11:51.237 ] 00:11:51.237 }' 00:11:51.237 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.237 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.495 [2024-10-11 09:45:36.102254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.495 BaseBdev2 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:51.495 09:45:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.495 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.754 [ 00:11:51.754 { 00:11:51.754 "name": "BaseBdev2", 00:11:51.754 "aliases": [ 00:11:51.754 "4241ce66-bad3-4f1c-852f-5265d7d957ff" 00:11:51.754 ], 00:11:51.754 "product_name": "Malloc disk", 00:11:51.754 "block_size": 512, 00:11:51.754 "num_blocks": 65536, 00:11:51.754 "uuid": "4241ce66-bad3-4f1c-852f-5265d7d957ff", 00:11:51.754 "assigned_rate_limits": { 00:11:51.754 "rw_ios_per_sec": 0, 00:11:51.754 "rw_mbytes_per_sec": 0, 00:11:51.754 "r_mbytes_per_sec": 0, 00:11:51.754 "w_mbytes_per_sec": 0 00:11:51.754 }, 00:11:51.754 "claimed": true, 00:11:51.754 "claim_type": "exclusive_write", 00:11:51.754 "zoned": false, 00:11:51.754 "supported_io_types": { 
00:11:51.754 "read": true, 00:11:51.754 "write": true, 00:11:51.754 "unmap": true, 00:11:51.754 "flush": true, 00:11:51.754 "reset": true, 00:11:51.754 "nvme_admin": false, 00:11:51.754 "nvme_io": false, 00:11:51.754 "nvme_io_md": false, 00:11:51.754 "write_zeroes": true, 00:11:51.754 "zcopy": true, 00:11:51.754 "get_zone_info": false, 00:11:51.754 "zone_management": false, 00:11:51.754 "zone_append": false, 00:11:51.754 "compare": false, 00:11:51.754 "compare_and_write": false, 00:11:51.754 "abort": true, 00:11:51.754 "seek_hole": false, 00:11:51.754 "seek_data": false, 00:11:51.754 "copy": true, 00:11:51.754 "nvme_iov_md": false 00:11:51.754 }, 00:11:51.754 "memory_domains": [ 00:11:51.754 { 00:11:51.754 "dma_device_id": "system", 00:11:51.754 "dma_device_type": 1 00:11:51.754 }, 00:11:51.754 { 00:11:51.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.754 "dma_device_type": 2 00:11:51.754 } 00:11:51.754 ], 00:11:51.754 "driver_specific": {} 00:11:51.754 } 00:11:51.754 ] 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.754 "name": "Existed_Raid", 00:11:51.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.754 "strip_size_kb": 64, 00:11:51.754 "state": "configuring", 00:11:51.754 "raid_level": "raid0", 00:11:51.754 "superblock": false, 00:11:51.754 "num_base_bdevs": 4, 00:11:51.754 "num_base_bdevs_discovered": 2, 00:11:51.754 "num_base_bdevs_operational": 4, 00:11:51.754 "base_bdevs_list": [ 00:11:51.754 { 00:11:51.754 "name": "BaseBdev1", 00:11:51.754 "uuid": "10e1d65e-a83c-43e6-a951-367d8300f8e2", 00:11:51.754 "is_configured": true, 00:11:51.754 "data_offset": 0, 00:11:51.754 "data_size": 65536 00:11:51.754 }, 00:11:51.754 { 00:11:51.754 "name": "BaseBdev2", 00:11:51.754 "uuid": "4241ce66-bad3-4f1c-852f-5265d7d957ff", 00:11:51.754 
"is_configured": true, 00:11:51.754 "data_offset": 0, 00:11:51.754 "data_size": 65536 00:11:51.754 }, 00:11:51.754 { 00:11:51.754 "name": "BaseBdev3", 00:11:51.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.754 "is_configured": false, 00:11:51.754 "data_offset": 0, 00:11:51.754 "data_size": 0 00:11:51.754 }, 00:11:51.754 { 00:11:51.754 "name": "BaseBdev4", 00:11:51.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.754 "is_configured": false, 00:11:51.754 "data_offset": 0, 00:11:51.754 "data_size": 0 00:11:51.754 } 00:11:51.754 ] 00:11:51.754 }' 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.754 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.013 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:52.013 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.013 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 [2024-10-11 09:45:36.644686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.272 BaseBdev3 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 [ 00:11:52.272 { 00:11:52.272 "name": "BaseBdev3", 00:11:52.272 "aliases": [ 00:11:52.272 "0b0533fd-2edb-44c1-a24d-85c395330e9a" 00:11:52.272 ], 00:11:52.272 "product_name": "Malloc disk", 00:11:52.272 "block_size": 512, 00:11:52.272 "num_blocks": 65536, 00:11:52.272 "uuid": "0b0533fd-2edb-44c1-a24d-85c395330e9a", 00:11:52.272 "assigned_rate_limits": { 00:11:52.272 "rw_ios_per_sec": 0, 00:11:52.272 "rw_mbytes_per_sec": 0, 00:11:52.272 "r_mbytes_per_sec": 0, 00:11:52.272 "w_mbytes_per_sec": 0 00:11:52.272 }, 00:11:52.272 "claimed": true, 00:11:52.272 "claim_type": "exclusive_write", 00:11:52.272 "zoned": false, 00:11:52.272 "supported_io_types": { 00:11:52.272 "read": true, 00:11:52.272 "write": true, 00:11:52.272 "unmap": true, 00:11:52.272 "flush": true, 00:11:52.272 "reset": true, 00:11:52.272 "nvme_admin": false, 00:11:52.272 "nvme_io": false, 00:11:52.272 "nvme_io_md": false, 00:11:52.272 "write_zeroes": true, 00:11:52.272 "zcopy": true, 00:11:52.272 "get_zone_info": false, 00:11:52.272 "zone_management": false, 00:11:52.272 "zone_append": false, 00:11:52.272 "compare": false, 00:11:52.272 "compare_and_write": false, 
00:11:52.272 "abort": true, 00:11:52.272 "seek_hole": false, 00:11:52.272 "seek_data": false, 00:11:52.272 "copy": true, 00:11:52.272 "nvme_iov_md": false 00:11:52.272 }, 00:11:52.272 "memory_domains": [ 00:11:52.272 { 00:11:52.272 "dma_device_id": "system", 00:11:52.272 "dma_device_type": 1 00:11:52.272 }, 00:11:52.272 { 00:11:52.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.272 "dma_device_type": 2 00:11:52.272 } 00:11:52.272 ], 00:11:52.272 "driver_specific": {} 00:11:52.272 } 00:11:52.272 ] 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.272 "name": "Existed_Raid", 00:11:52.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.272 "strip_size_kb": 64, 00:11:52.272 "state": "configuring", 00:11:52.272 "raid_level": "raid0", 00:11:52.272 "superblock": false, 00:11:52.272 "num_base_bdevs": 4, 00:11:52.272 "num_base_bdevs_discovered": 3, 00:11:52.272 "num_base_bdevs_operational": 4, 00:11:52.272 "base_bdevs_list": [ 00:11:52.272 { 00:11:52.272 "name": "BaseBdev1", 00:11:52.272 "uuid": "10e1d65e-a83c-43e6-a951-367d8300f8e2", 00:11:52.272 "is_configured": true, 00:11:52.272 "data_offset": 0, 00:11:52.272 "data_size": 65536 00:11:52.272 }, 00:11:52.272 { 00:11:52.272 "name": "BaseBdev2", 00:11:52.272 "uuid": "4241ce66-bad3-4f1c-852f-5265d7d957ff", 00:11:52.272 "is_configured": true, 00:11:52.272 "data_offset": 0, 00:11:52.272 "data_size": 65536 00:11:52.272 }, 00:11:52.272 { 00:11:52.272 "name": "BaseBdev3", 00:11:52.272 "uuid": "0b0533fd-2edb-44c1-a24d-85c395330e9a", 00:11:52.272 "is_configured": true, 00:11:52.272 "data_offset": 0, 00:11:52.272 "data_size": 65536 00:11:52.272 }, 00:11:52.272 { 00:11:52.272 "name": "BaseBdev4", 00:11:52.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.272 "is_configured": false, 
00:11:52.272 "data_offset": 0, 00:11:52.272 "data_size": 0 00:11:52.272 } 00:11:52.272 ] 00:11:52.272 }' 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.272 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.532 [2024-10-11 09:45:37.152114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.532 [2024-10-11 09:45:37.152172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:52.532 [2024-10-11 09:45:37.152182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:52.532 [2024-10-11 09:45:37.152486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:52.532 [2024-10-11 09:45:37.152663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:52.532 [2024-10-11 09:45:37.152677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:52.532 [2024-10-11 09:45:37.153009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.532 BaseBdev4 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.532 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.791 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.791 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:52.791 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.791 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.791 [ 00:11:52.791 { 00:11:52.791 "name": "BaseBdev4", 00:11:52.791 "aliases": [ 00:11:52.791 "1150d78a-4f54-4ad4-9127-d8cfae918b84" 00:11:52.791 ], 00:11:52.791 "product_name": "Malloc disk", 00:11:52.791 "block_size": 512, 00:11:52.791 "num_blocks": 65536, 00:11:52.791 "uuid": "1150d78a-4f54-4ad4-9127-d8cfae918b84", 00:11:52.791 "assigned_rate_limits": { 00:11:52.791 "rw_ios_per_sec": 0, 00:11:52.791 "rw_mbytes_per_sec": 0, 00:11:52.791 "r_mbytes_per_sec": 0, 00:11:52.791 "w_mbytes_per_sec": 0 00:11:52.791 }, 00:11:52.791 "claimed": true, 00:11:52.791 "claim_type": "exclusive_write", 00:11:52.791 "zoned": false, 00:11:52.791 "supported_io_types": { 00:11:52.791 "read": true, 00:11:52.791 "write": true, 00:11:52.791 "unmap": true, 00:11:52.791 "flush": true, 00:11:52.791 "reset": true, 00:11:52.791 
"nvme_admin": false, 00:11:52.791 "nvme_io": false, 00:11:52.791 "nvme_io_md": false, 00:11:52.791 "write_zeroes": true, 00:11:52.791 "zcopy": true, 00:11:52.791 "get_zone_info": false, 00:11:52.791 "zone_management": false, 00:11:52.791 "zone_append": false, 00:11:52.791 "compare": false, 00:11:52.791 "compare_and_write": false, 00:11:52.792 "abort": true, 00:11:52.792 "seek_hole": false, 00:11:52.792 "seek_data": false, 00:11:52.792 "copy": true, 00:11:52.792 "nvme_iov_md": false 00:11:52.792 }, 00:11:52.792 "memory_domains": [ 00:11:52.792 { 00:11:52.792 "dma_device_id": "system", 00:11:52.792 "dma_device_type": 1 00:11:52.792 }, 00:11:52.792 { 00:11:52.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.792 "dma_device_type": 2 00:11:52.792 } 00:11:52.792 ], 00:11:52.792 "driver_specific": {} 00:11:52.792 } 00:11:52.792 ] 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.792 09:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.792 "name": "Existed_Raid", 00:11:52.792 "uuid": "09d86329-4efa-4aa4-a218-3f5a6307dea5", 00:11:52.792 "strip_size_kb": 64, 00:11:52.792 "state": "online", 00:11:52.792 "raid_level": "raid0", 00:11:52.792 "superblock": false, 00:11:52.792 "num_base_bdevs": 4, 00:11:52.792 "num_base_bdevs_discovered": 4, 00:11:52.792 "num_base_bdevs_operational": 4, 00:11:52.792 "base_bdevs_list": [ 00:11:52.792 { 00:11:52.792 "name": "BaseBdev1", 00:11:52.792 "uuid": "10e1d65e-a83c-43e6-a951-367d8300f8e2", 00:11:52.792 "is_configured": true, 00:11:52.792 "data_offset": 0, 00:11:52.792 "data_size": 65536 00:11:52.792 }, 00:11:52.792 { 00:11:52.792 "name": "BaseBdev2", 00:11:52.792 "uuid": "4241ce66-bad3-4f1c-852f-5265d7d957ff", 00:11:52.792 "is_configured": true, 00:11:52.792 "data_offset": 0, 00:11:52.792 "data_size": 65536 00:11:52.792 }, 00:11:52.792 { 00:11:52.792 "name": "BaseBdev3", 00:11:52.792 "uuid": 
"0b0533fd-2edb-44c1-a24d-85c395330e9a", 00:11:52.792 "is_configured": true, 00:11:52.792 "data_offset": 0, 00:11:52.792 "data_size": 65536 00:11:52.792 }, 00:11:52.792 { 00:11:52.792 "name": "BaseBdev4", 00:11:52.792 "uuid": "1150d78a-4f54-4ad4-9127-d8cfae918b84", 00:11:52.792 "is_configured": true, 00:11:52.792 "data_offset": 0, 00:11:52.792 "data_size": 65536 00:11:52.792 } 00:11:52.792 ] 00:11:52.792 }' 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.792 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.050 [2024-10-11 09:45:37.620046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.050 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.050 09:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.050 "name": "Existed_Raid", 00:11:53.050 "aliases": [ 00:11:53.050 "09d86329-4efa-4aa4-a218-3f5a6307dea5" 00:11:53.050 ], 00:11:53.050 "product_name": "Raid Volume", 00:11:53.050 "block_size": 512, 00:11:53.050 "num_blocks": 262144, 00:11:53.050 "uuid": "09d86329-4efa-4aa4-a218-3f5a6307dea5", 00:11:53.050 "assigned_rate_limits": { 00:11:53.050 "rw_ios_per_sec": 0, 00:11:53.050 "rw_mbytes_per_sec": 0, 00:11:53.050 "r_mbytes_per_sec": 0, 00:11:53.050 "w_mbytes_per_sec": 0 00:11:53.050 }, 00:11:53.050 "claimed": false, 00:11:53.050 "zoned": false, 00:11:53.050 "supported_io_types": { 00:11:53.050 "read": true, 00:11:53.050 "write": true, 00:11:53.050 "unmap": true, 00:11:53.050 "flush": true, 00:11:53.050 "reset": true, 00:11:53.050 "nvme_admin": false, 00:11:53.050 "nvme_io": false, 00:11:53.050 "nvme_io_md": false, 00:11:53.050 "write_zeroes": true, 00:11:53.050 "zcopy": false, 00:11:53.050 "get_zone_info": false, 00:11:53.050 "zone_management": false, 00:11:53.050 "zone_append": false, 00:11:53.050 "compare": false, 00:11:53.050 "compare_and_write": false, 00:11:53.050 "abort": false, 00:11:53.050 "seek_hole": false, 00:11:53.050 "seek_data": false, 00:11:53.051 "copy": false, 00:11:53.051 "nvme_iov_md": false 00:11:53.051 }, 00:11:53.051 "memory_domains": [ 00:11:53.051 { 00:11:53.051 "dma_device_id": "system", 00:11:53.051 "dma_device_type": 1 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.051 "dma_device_type": 2 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "dma_device_id": "system", 00:11:53.051 "dma_device_type": 1 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.051 "dma_device_type": 2 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "dma_device_id": "system", 00:11:53.051 "dma_device_type": 1 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:53.051 "dma_device_type": 2 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "dma_device_id": "system", 00:11:53.051 "dma_device_type": 1 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.051 "dma_device_type": 2 00:11:53.051 } 00:11:53.051 ], 00:11:53.051 "driver_specific": { 00:11:53.051 "raid": { 00:11:53.051 "uuid": "09d86329-4efa-4aa4-a218-3f5a6307dea5", 00:11:53.051 "strip_size_kb": 64, 00:11:53.051 "state": "online", 00:11:53.051 "raid_level": "raid0", 00:11:53.051 "superblock": false, 00:11:53.051 "num_base_bdevs": 4, 00:11:53.051 "num_base_bdevs_discovered": 4, 00:11:53.051 "num_base_bdevs_operational": 4, 00:11:53.051 "base_bdevs_list": [ 00:11:53.051 { 00:11:53.051 "name": "BaseBdev1", 00:11:53.051 "uuid": "10e1d65e-a83c-43e6-a951-367d8300f8e2", 00:11:53.051 "is_configured": true, 00:11:53.051 "data_offset": 0, 00:11:53.051 "data_size": 65536 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "name": "BaseBdev2", 00:11:53.051 "uuid": "4241ce66-bad3-4f1c-852f-5265d7d957ff", 00:11:53.051 "is_configured": true, 00:11:53.051 "data_offset": 0, 00:11:53.051 "data_size": 65536 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "name": "BaseBdev3", 00:11:53.051 "uuid": "0b0533fd-2edb-44c1-a24d-85c395330e9a", 00:11:53.051 "is_configured": true, 00:11:53.051 "data_offset": 0, 00:11:53.051 "data_size": 65536 00:11:53.051 }, 00:11:53.051 { 00:11:53.051 "name": "BaseBdev4", 00:11:53.051 "uuid": "1150d78a-4f54-4ad4-9127-d8cfae918b84", 00:11:53.051 "is_configured": true, 00:11:53.051 "data_offset": 0, 00:11:53.051 "data_size": 65536 00:11:53.051 } 00:11:53.051 ] 00:11:53.051 } 00:11:53.051 } 00:11:53.051 }' 00:11:53.051 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:53.309 BaseBdev2 00:11:53.309 BaseBdev3 
00:11:53.309 BaseBdev4' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.309 09:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.309 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.310 09:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.310 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.310 [2024-10-11 09:45:37.923174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.310 [2024-10-11 09:45:37.923266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.310 [2024-10-11 09:45:37.923359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.568 "name": "Existed_Raid", 00:11:53.568 "uuid": "09d86329-4efa-4aa4-a218-3f5a6307dea5", 00:11:53.568 "strip_size_kb": 64, 00:11:53.568 "state": "offline", 00:11:53.568 "raid_level": "raid0", 00:11:53.568 "superblock": false, 00:11:53.568 "num_base_bdevs": 4, 00:11:53.568 "num_base_bdevs_discovered": 3, 00:11:53.568 "num_base_bdevs_operational": 3, 00:11:53.568 "base_bdevs_list": [ 00:11:53.568 { 00:11:53.568 "name": null, 00:11:53.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.568 "is_configured": false, 00:11:53.568 "data_offset": 0, 00:11:53.568 "data_size": 65536 00:11:53.568 }, 00:11:53.568 { 00:11:53.568 "name": "BaseBdev2", 00:11:53.568 "uuid": "4241ce66-bad3-4f1c-852f-5265d7d957ff", 00:11:53.568 "is_configured": 
true, 00:11:53.568 "data_offset": 0, 00:11:53.568 "data_size": 65536 00:11:53.568 }, 00:11:53.568 { 00:11:53.568 "name": "BaseBdev3", 00:11:53.568 "uuid": "0b0533fd-2edb-44c1-a24d-85c395330e9a", 00:11:53.568 "is_configured": true, 00:11:53.568 "data_offset": 0, 00:11:53.568 "data_size": 65536 00:11:53.568 }, 00:11:53.568 { 00:11:53.568 "name": "BaseBdev4", 00:11:53.568 "uuid": "1150d78a-4f54-4ad4-9127-d8cfae918b84", 00:11:53.568 "is_configured": true, 00:11:53.568 "data_offset": 0, 00:11:53.568 "data_size": 65536 00:11:53.568 } 00:11:53.568 ] 00:11:53.568 }' 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.568 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.133 [2024-10-11 09:45:38.538569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.133 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.133 [2024-10-11 09:45:38.694802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.391 09:45:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.391 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.391 [2024-10-11 09:45:38.860678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:54.391 [2024-10-11 09:45:38.860821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.392 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.392 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.650 BaseBdev2 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.650 [ 00:11:54.650 { 00:11:54.650 "name": "BaseBdev2", 00:11:54.650 "aliases": [ 00:11:54.650 "2acc747d-70c4-41a6-bc7d-99e50bbb6578" 00:11:54.650 ], 00:11:54.650 "product_name": "Malloc disk", 00:11:54.650 "block_size": 512, 00:11:54.650 "num_blocks": 65536, 00:11:54.650 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:54.650 "assigned_rate_limits": { 00:11:54.650 "rw_ios_per_sec": 0, 00:11:54.650 "rw_mbytes_per_sec": 0, 00:11:54.650 "r_mbytes_per_sec": 0, 00:11:54.650 "w_mbytes_per_sec": 0 00:11:54.650 }, 00:11:54.650 "claimed": false, 00:11:54.650 "zoned": false, 00:11:54.650 "supported_io_types": { 00:11:54.650 "read": true, 00:11:54.650 "write": true, 00:11:54.650 "unmap": true, 00:11:54.650 "flush": true, 00:11:54.650 "reset": true, 00:11:54.650 "nvme_admin": false, 00:11:54.650 "nvme_io": false, 00:11:54.650 "nvme_io_md": false, 00:11:54.650 "write_zeroes": true, 00:11:54.650 "zcopy": true, 00:11:54.650 "get_zone_info": false, 00:11:54.650 "zone_management": false, 00:11:54.650 "zone_append": false, 00:11:54.650 "compare": false, 00:11:54.650 "compare_and_write": false, 00:11:54.650 "abort": true, 00:11:54.650 "seek_hole": false, 00:11:54.650 "seek_data": false, 
00:11:54.650 "copy": true, 00:11:54.650 "nvme_iov_md": false 00:11:54.650 }, 00:11:54.650 "memory_domains": [ 00:11:54.650 { 00:11:54.650 "dma_device_id": "system", 00:11:54.650 "dma_device_type": 1 00:11:54.650 }, 00:11:54.650 { 00:11:54.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.650 "dma_device_type": 2 00:11:54.650 } 00:11:54.650 ], 00:11:54.650 "driver_specific": {} 00:11:54.650 } 00:11:54.650 ] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.650 BaseBdev3 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:54.650 
09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.650 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.650 [ 00:11:54.650 { 00:11:54.650 "name": "BaseBdev3", 00:11:54.650 "aliases": [ 00:11:54.650 "5a8a65f1-799c-42df-bae2-ad953f2d7b87" 00:11:54.650 ], 00:11:54.650 "product_name": "Malloc disk", 00:11:54.650 "block_size": 512, 00:11:54.650 "num_blocks": 65536, 00:11:54.650 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:54.650 "assigned_rate_limits": { 00:11:54.650 "rw_ios_per_sec": 0, 00:11:54.650 "rw_mbytes_per_sec": 0, 00:11:54.650 "r_mbytes_per_sec": 0, 00:11:54.650 "w_mbytes_per_sec": 0 00:11:54.650 }, 00:11:54.650 "claimed": false, 00:11:54.650 "zoned": false, 00:11:54.650 "supported_io_types": { 00:11:54.650 "read": true, 00:11:54.650 "write": true, 00:11:54.650 "unmap": true, 00:11:54.650 "flush": true, 00:11:54.650 "reset": true, 00:11:54.651 "nvme_admin": false, 00:11:54.651 "nvme_io": false, 00:11:54.651 "nvme_io_md": false, 00:11:54.651 "write_zeroes": true, 00:11:54.651 "zcopy": true, 00:11:54.651 "get_zone_info": false, 00:11:54.651 "zone_management": false, 00:11:54.651 "zone_append": false, 00:11:54.651 "compare": false, 00:11:54.651 "compare_and_write": false, 00:11:54.651 "abort": true, 00:11:54.651 "seek_hole": false, 00:11:54.651 "seek_data": false, 00:11:54.651 
"copy": true, 00:11:54.651 "nvme_iov_md": false 00:11:54.651 }, 00:11:54.651 "memory_domains": [ 00:11:54.651 { 00:11:54.651 "dma_device_id": "system", 00:11:54.651 "dma_device_type": 1 00:11:54.651 }, 00:11:54.651 { 00:11:54.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.651 "dma_device_type": 2 00:11:54.651 } 00:11:54.651 ], 00:11:54.651 "driver_specific": {} 00:11:54.651 } 00:11:54.651 ] 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.651 BaseBdev4 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:54.651 09:45:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.651 [ 00:11:54.651 { 00:11:54.651 "name": "BaseBdev4", 00:11:54.651 "aliases": [ 00:11:54.651 "bdd754fb-c64b-4aa5-ab78-84000a4366c3" 00:11:54.651 ], 00:11:54.651 "product_name": "Malloc disk", 00:11:54.651 "block_size": 512, 00:11:54.651 "num_blocks": 65536, 00:11:54.651 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:54.651 "assigned_rate_limits": { 00:11:54.651 "rw_ios_per_sec": 0, 00:11:54.651 "rw_mbytes_per_sec": 0, 00:11:54.651 "r_mbytes_per_sec": 0, 00:11:54.651 "w_mbytes_per_sec": 0 00:11:54.651 }, 00:11:54.651 "claimed": false, 00:11:54.651 "zoned": false, 00:11:54.651 "supported_io_types": { 00:11:54.651 "read": true, 00:11:54.651 "write": true, 00:11:54.651 "unmap": true, 00:11:54.651 "flush": true, 00:11:54.651 "reset": true, 00:11:54.651 "nvme_admin": false, 00:11:54.651 "nvme_io": false, 00:11:54.651 "nvme_io_md": false, 00:11:54.651 "write_zeroes": true, 00:11:54.651 "zcopy": true, 00:11:54.651 "get_zone_info": false, 00:11:54.651 "zone_management": false, 00:11:54.651 "zone_append": false, 00:11:54.651 "compare": false, 00:11:54.651 "compare_and_write": false, 00:11:54.651 "abort": true, 00:11:54.651 "seek_hole": false, 00:11:54.651 "seek_data": false, 00:11:54.651 "copy": true, 
00:11:54.651 "nvme_iov_md": false 00:11:54.651 }, 00:11:54.651 "memory_domains": [ 00:11:54.651 { 00:11:54.651 "dma_device_id": "system", 00:11:54.651 "dma_device_type": 1 00:11:54.651 }, 00:11:54.651 { 00:11:54.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.651 "dma_device_type": 2 00:11:54.651 } 00:11:54.651 ], 00:11:54.651 "driver_specific": {} 00:11:54.651 } 00:11:54.651 ] 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.651 [2024-10-11 09:45:39.251987] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.651 [2024-10-11 09:45:39.252085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.651 [2024-10-11 09:45:39.252139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.651 [2024-10-11 09:45:39.254274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.651 [2024-10-11 09:45:39.254382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.651 09:45:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.651 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.909 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.909 "name": "Existed_Raid", 00:11:54.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.909 "strip_size_kb": 64, 00:11:54.909 "state": "configuring", 00:11:54.909 
"raid_level": "raid0", 00:11:54.909 "superblock": false, 00:11:54.909 "num_base_bdevs": 4, 00:11:54.909 "num_base_bdevs_discovered": 3, 00:11:54.909 "num_base_bdevs_operational": 4, 00:11:54.909 "base_bdevs_list": [ 00:11:54.909 { 00:11:54.909 "name": "BaseBdev1", 00:11:54.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.909 "is_configured": false, 00:11:54.909 "data_offset": 0, 00:11:54.909 "data_size": 0 00:11:54.909 }, 00:11:54.909 { 00:11:54.909 "name": "BaseBdev2", 00:11:54.909 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:54.909 "is_configured": true, 00:11:54.909 "data_offset": 0, 00:11:54.909 "data_size": 65536 00:11:54.909 }, 00:11:54.909 { 00:11:54.909 "name": "BaseBdev3", 00:11:54.909 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:54.909 "is_configured": true, 00:11:54.909 "data_offset": 0, 00:11:54.909 "data_size": 65536 00:11:54.909 }, 00:11:54.909 { 00:11:54.909 "name": "BaseBdev4", 00:11:54.909 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:54.909 "is_configured": true, 00:11:54.909 "data_offset": 0, 00:11:54.909 "data_size": 65536 00:11:54.909 } 00:11:54.909 ] 00:11:54.909 }' 00:11:54.909 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.909 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.168 [2024-10-11 09:45:39.743251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.168 "name": "Existed_Raid", 00:11:55.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.168 "strip_size_kb": 64, 00:11:55.168 "state": "configuring", 00:11:55.168 "raid_level": "raid0", 00:11:55.168 "superblock": false, 00:11:55.168 
"num_base_bdevs": 4, 00:11:55.168 "num_base_bdevs_discovered": 2, 00:11:55.168 "num_base_bdevs_operational": 4, 00:11:55.168 "base_bdevs_list": [ 00:11:55.168 { 00:11:55.168 "name": "BaseBdev1", 00:11:55.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.168 "is_configured": false, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 0 00:11:55.168 }, 00:11:55.168 { 00:11:55.168 "name": null, 00:11:55.168 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:55.168 "is_configured": false, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 65536 00:11:55.168 }, 00:11:55.168 { 00:11:55.168 "name": "BaseBdev3", 00:11:55.168 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:55.168 "is_configured": true, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 65536 00:11:55.168 }, 00:11:55.168 { 00:11:55.168 "name": "BaseBdev4", 00:11:55.168 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:55.168 "is_configured": true, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 65536 00:11:55.168 } 00:11:55.168 ] 00:11:55.168 }' 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.168 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:55.743 09:45:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.743 [2024-10-11 09:45:40.275850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.743 BaseBdev1 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.743 [ 00:11:55.743 { 00:11:55.743 "name": "BaseBdev1", 00:11:55.743 "aliases": [ 00:11:55.743 "007d2ef3-db2f-4132-bdc5-9b24072f8f6e" 00:11:55.743 ], 00:11:55.743 "product_name": "Malloc disk", 00:11:55.743 "block_size": 512, 00:11:55.743 "num_blocks": 65536, 00:11:55.743 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:55.743 "assigned_rate_limits": { 00:11:55.743 "rw_ios_per_sec": 0, 00:11:55.743 "rw_mbytes_per_sec": 0, 00:11:55.743 "r_mbytes_per_sec": 0, 00:11:55.743 "w_mbytes_per_sec": 0 00:11:55.743 }, 00:11:55.743 "claimed": true, 00:11:55.743 "claim_type": "exclusive_write", 00:11:55.743 "zoned": false, 00:11:55.743 "supported_io_types": { 00:11:55.743 "read": true, 00:11:55.743 "write": true, 00:11:55.743 "unmap": true, 00:11:55.743 "flush": true, 00:11:55.743 "reset": true, 00:11:55.743 "nvme_admin": false, 00:11:55.743 "nvme_io": false, 00:11:55.743 "nvme_io_md": false, 00:11:55.743 "write_zeroes": true, 00:11:55.743 "zcopy": true, 00:11:55.743 "get_zone_info": false, 00:11:55.743 "zone_management": false, 00:11:55.743 "zone_append": false, 00:11:55.743 "compare": false, 00:11:55.743 "compare_and_write": false, 00:11:55.743 "abort": true, 00:11:55.743 "seek_hole": false, 00:11:55.743 "seek_data": false, 00:11:55.743 "copy": true, 00:11:55.743 "nvme_iov_md": false 00:11:55.743 }, 00:11:55.743 "memory_domains": [ 00:11:55.743 { 00:11:55.743 "dma_device_id": "system", 00:11:55.743 "dma_device_type": 1 00:11:55.743 }, 00:11:55.743 { 00:11:55.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.743 "dma_device_type": 2 00:11:55.743 } 00:11:55.743 ], 00:11:55.743 "driver_specific": {} 00:11:55.743 } 00:11:55.743 ] 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.743 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.743 "name": "Existed_Raid", 00:11:55.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.743 "strip_size_kb": 64, 00:11:55.743 "state": "configuring", 00:11:55.743 "raid_level": "raid0", 00:11:55.743 "superblock": false, 
00:11:55.743 "num_base_bdevs": 4, 00:11:55.743 "num_base_bdevs_discovered": 3, 00:11:55.743 "num_base_bdevs_operational": 4, 00:11:55.743 "base_bdevs_list": [ 00:11:55.743 { 00:11:55.743 "name": "BaseBdev1", 00:11:55.743 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:55.743 "is_configured": true, 00:11:55.743 "data_offset": 0, 00:11:55.743 "data_size": 65536 00:11:55.743 }, 00:11:55.744 { 00:11:55.744 "name": null, 00:11:55.744 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:55.744 "is_configured": false, 00:11:55.744 "data_offset": 0, 00:11:55.744 "data_size": 65536 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "name": "BaseBdev3", 00:11:55.744 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:55.744 "is_configured": true, 00:11:55.744 "data_offset": 0, 00:11:55.744 "data_size": 65536 00:11:55.744 }, 00:11:55.744 { 00:11:55.744 "name": "BaseBdev4", 00:11:55.744 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:55.744 "is_configured": true, 00:11:55.744 "data_offset": 0, 00:11:55.744 "data_size": 65536 00:11:55.744 } 00:11:55.744 ] 00:11:55.744 }' 00:11:55.744 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.744 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:56.311 09:45:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.311 [2024-10-11 09:45:40.815064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.311 "name": "Existed_Raid", 00:11:56.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.311 "strip_size_kb": 64, 00:11:56.311 "state": "configuring", 00:11:56.311 "raid_level": "raid0", 00:11:56.311 "superblock": false, 00:11:56.311 "num_base_bdevs": 4, 00:11:56.311 "num_base_bdevs_discovered": 2, 00:11:56.311 "num_base_bdevs_operational": 4, 00:11:56.311 "base_bdevs_list": [ 00:11:56.311 { 00:11:56.311 "name": "BaseBdev1", 00:11:56.311 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:56.311 "is_configured": true, 00:11:56.311 "data_offset": 0, 00:11:56.311 "data_size": 65536 00:11:56.311 }, 00:11:56.311 { 00:11:56.311 "name": null, 00:11:56.311 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:56.311 "is_configured": false, 00:11:56.311 "data_offset": 0, 00:11:56.311 "data_size": 65536 00:11:56.311 }, 00:11:56.311 { 00:11:56.311 "name": null, 00:11:56.311 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:56.311 "is_configured": false, 00:11:56.311 "data_offset": 0, 00:11:56.311 "data_size": 65536 00:11:56.311 }, 00:11:56.311 { 00:11:56.311 "name": "BaseBdev4", 00:11:56.311 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:56.311 "is_configured": true, 00:11:56.311 "data_offset": 0, 00:11:56.311 "data_size": 65536 00:11:56.311 } 00:11:56.311 ] 00:11:56.311 }' 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.311 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.879 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.880 [2024-10-11 09:45:41.322253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.880 "name": "Existed_Raid", 00:11:56.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.880 "strip_size_kb": 64, 00:11:56.880 "state": "configuring", 00:11:56.880 "raid_level": "raid0", 00:11:56.880 "superblock": false, 00:11:56.880 "num_base_bdevs": 4, 00:11:56.880 "num_base_bdevs_discovered": 3, 00:11:56.880 "num_base_bdevs_operational": 4, 00:11:56.880 "base_bdevs_list": [ 00:11:56.880 { 00:11:56.880 "name": "BaseBdev1", 00:11:56.880 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:56.880 "is_configured": true, 00:11:56.880 "data_offset": 0, 00:11:56.880 "data_size": 65536 00:11:56.880 }, 00:11:56.880 { 00:11:56.880 "name": null, 00:11:56.880 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:56.880 "is_configured": false, 00:11:56.880 "data_offset": 0, 00:11:56.880 "data_size": 65536 00:11:56.880 }, 00:11:56.880 { 00:11:56.880 "name": "BaseBdev3", 00:11:56.880 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:56.880 "is_configured": 
true, 00:11:56.880 "data_offset": 0, 00:11:56.880 "data_size": 65536 00:11:56.880 }, 00:11:56.880 { 00:11:56.880 "name": "BaseBdev4", 00:11:56.880 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:56.880 "is_configured": true, 00:11:56.880 "data_offset": 0, 00:11:56.880 "data_size": 65536 00:11:56.880 } 00:11:56.880 ] 00:11:56.880 }' 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.880 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.138 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.138 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.138 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 [2024-10-11 09:45:41.773519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.397 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.398 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.398 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.398 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.398 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.398 "name": "Existed_Raid", 00:11:57.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.398 "strip_size_kb": 64, 00:11:57.398 "state": "configuring", 00:11:57.398 "raid_level": "raid0", 00:11:57.398 "superblock": false, 00:11:57.398 "num_base_bdevs": 4, 00:11:57.398 "num_base_bdevs_discovered": 2, 00:11:57.398 "num_base_bdevs_operational": 4, 00:11:57.398 
"base_bdevs_list": [ 00:11:57.398 { 00:11:57.398 "name": null, 00:11:57.398 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:57.398 "is_configured": false, 00:11:57.398 "data_offset": 0, 00:11:57.398 "data_size": 65536 00:11:57.398 }, 00:11:57.398 { 00:11:57.398 "name": null, 00:11:57.398 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:57.398 "is_configured": false, 00:11:57.398 "data_offset": 0, 00:11:57.398 "data_size": 65536 00:11:57.398 }, 00:11:57.398 { 00:11:57.398 "name": "BaseBdev3", 00:11:57.398 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:57.398 "is_configured": true, 00:11:57.398 "data_offset": 0, 00:11:57.398 "data_size": 65536 00:11:57.398 }, 00:11:57.398 { 00:11:57.398 "name": "BaseBdev4", 00:11:57.398 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:57.398 "is_configured": true, 00:11:57.398 "data_offset": 0, 00:11:57.398 "data_size": 65536 00:11:57.398 } 00:11:57.398 ] 00:11:57.398 }' 00:11:57.398 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.398 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:57.965 09:45:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.965 [2024-10-11 09:45:42.371854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.965 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.965 "name": "Existed_Raid", 00:11:57.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.965 "strip_size_kb": 64, 00:11:57.965 "state": "configuring", 00:11:57.965 "raid_level": "raid0", 00:11:57.965 "superblock": false, 00:11:57.965 "num_base_bdevs": 4, 00:11:57.965 "num_base_bdevs_discovered": 3, 00:11:57.965 "num_base_bdevs_operational": 4, 00:11:57.965 "base_bdevs_list": [ 00:11:57.965 { 00:11:57.965 "name": null, 00:11:57.965 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:57.965 "is_configured": false, 00:11:57.965 "data_offset": 0, 00:11:57.965 "data_size": 65536 00:11:57.965 }, 00:11:57.965 { 00:11:57.965 "name": "BaseBdev2", 00:11:57.965 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:57.965 "is_configured": true, 00:11:57.965 "data_offset": 0, 00:11:57.965 "data_size": 65536 00:11:57.965 }, 00:11:57.965 { 00:11:57.965 "name": "BaseBdev3", 00:11:57.965 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:57.965 "is_configured": true, 00:11:57.965 "data_offset": 0, 00:11:57.965 "data_size": 65536 00:11:57.965 }, 00:11:57.965 { 00:11:57.965 "name": "BaseBdev4", 00:11:57.965 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:57.965 "is_configured": true, 00:11:57.965 "data_offset": 0, 00:11:57.965 "data_size": 65536 00:11:57.965 } 00:11:57.965 ] 00:11:57.965 }' 00:11:57.966 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.966 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.224 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.224 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 007d2ef3-db2f-4132-bdc5-9b24072f8f6e 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.482 [2024-10-11 09:45:42.992330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:58.482 [2024-10-11 09:45:42.992390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:58.482 [2024-10-11 09:45:42.992399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:58.482 [2024-10-11 09:45:42.992698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:58.482 [2024-10-11 09:45:42.992893] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000008200 00:11:58.482 [2024-10-11 09:45:42.992913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:58.482 [2024-10-11 09:45:42.993171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.482 NewBaseBdev 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.482 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.482 [ 00:11:58.482 { 00:11:58.482 "name": "NewBaseBdev", 00:11:58.482 
"aliases": [ 00:11:58.482 "007d2ef3-db2f-4132-bdc5-9b24072f8f6e" 00:11:58.482 ], 00:11:58.482 "product_name": "Malloc disk", 00:11:58.482 "block_size": 512, 00:11:58.482 "num_blocks": 65536, 00:11:58.482 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:58.482 "assigned_rate_limits": { 00:11:58.482 "rw_ios_per_sec": 0, 00:11:58.482 "rw_mbytes_per_sec": 0, 00:11:58.482 "r_mbytes_per_sec": 0, 00:11:58.482 "w_mbytes_per_sec": 0 00:11:58.482 }, 00:11:58.482 "claimed": true, 00:11:58.482 "claim_type": "exclusive_write", 00:11:58.482 "zoned": false, 00:11:58.482 "supported_io_types": { 00:11:58.482 "read": true, 00:11:58.482 "write": true, 00:11:58.482 "unmap": true, 00:11:58.482 "flush": true, 00:11:58.482 "reset": true, 00:11:58.482 "nvme_admin": false, 00:11:58.482 "nvme_io": false, 00:11:58.482 "nvme_io_md": false, 00:11:58.482 "write_zeroes": true, 00:11:58.482 "zcopy": true, 00:11:58.482 "get_zone_info": false, 00:11:58.482 "zone_management": false, 00:11:58.482 "zone_append": false, 00:11:58.482 "compare": false, 00:11:58.482 "compare_and_write": false, 00:11:58.482 "abort": true, 00:11:58.482 "seek_hole": false, 00:11:58.482 "seek_data": false, 00:11:58.482 "copy": true, 00:11:58.482 "nvme_iov_md": false 00:11:58.482 }, 00:11:58.482 "memory_domains": [ 00:11:58.482 { 00:11:58.482 "dma_device_id": "system", 00:11:58.482 "dma_device_type": 1 00:11:58.482 }, 00:11:58.482 { 00:11:58.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.482 "dma_device_type": 2 00:11:58.482 } 00:11:58.482 ], 00:11:58.482 "driver_specific": {} 00:11:58.482 } 00:11:58.482 ] 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.482 "name": "Existed_Raid", 00:11:58.482 "uuid": "a45dc0c3-a058-4d9f-987d-402c1088dcef", 00:11:58.482 "strip_size_kb": 64, 00:11:58.482 "state": "online", 00:11:58.482 "raid_level": "raid0", 00:11:58.482 "superblock": false, 00:11:58.482 "num_base_bdevs": 4, 00:11:58.482 "num_base_bdevs_discovered": 4, 00:11:58.482 "num_base_bdevs_operational": 4, 00:11:58.482 
"base_bdevs_list": [ 00:11:58.482 { 00:11:58.482 "name": "NewBaseBdev", 00:11:58.482 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:58.482 "is_configured": true, 00:11:58.482 "data_offset": 0, 00:11:58.482 "data_size": 65536 00:11:58.482 }, 00:11:58.482 { 00:11:58.482 "name": "BaseBdev2", 00:11:58.482 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:58.482 "is_configured": true, 00:11:58.482 "data_offset": 0, 00:11:58.482 "data_size": 65536 00:11:58.482 }, 00:11:58.482 { 00:11:58.482 "name": "BaseBdev3", 00:11:58.482 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:58.482 "is_configured": true, 00:11:58.482 "data_offset": 0, 00:11:58.482 "data_size": 65536 00:11:58.482 }, 00:11:58.482 { 00:11:58.482 "name": "BaseBdev4", 00:11:58.482 "uuid": "bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:58.482 "is_configured": true, 00:11:58.482 "data_offset": 0, 00:11:58.482 "data_size": 65536 00:11:58.482 } 00:11:58.482 ] 00:11:58.482 }' 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.482 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.050 [2024-10-11 09:45:43.512202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.050 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.050 "name": "Existed_Raid", 00:11:59.050 "aliases": [ 00:11:59.050 "a45dc0c3-a058-4d9f-987d-402c1088dcef" 00:11:59.050 ], 00:11:59.050 "product_name": "Raid Volume", 00:11:59.050 "block_size": 512, 00:11:59.050 "num_blocks": 262144, 00:11:59.050 "uuid": "a45dc0c3-a058-4d9f-987d-402c1088dcef", 00:11:59.050 "assigned_rate_limits": { 00:11:59.050 "rw_ios_per_sec": 0, 00:11:59.050 "rw_mbytes_per_sec": 0, 00:11:59.050 "r_mbytes_per_sec": 0, 00:11:59.050 "w_mbytes_per_sec": 0 00:11:59.050 }, 00:11:59.050 "claimed": false, 00:11:59.050 "zoned": false, 00:11:59.050 "supported_io_types": { 00:11:59.050 "read": true, 00:11:59.050 "write": true, 00:11:59.050 "unmap": true, 00:11:59.050 "flush": true, 00:11:59.050 "reset": true, 00:11:59.050 "nvme_admin": false, 00:11:59.050 "nvme_io": false, 00:11:59.050 "nvme_io_md": false, 00:11:59.050 "write_zeroes": true, 00:11:59.050 "zcopy": false, 00:11:59.050 "get_zone_info": false, 00:11:59.050 "zone_management": false, 00:11:59.050 "zone_append": false, 00:11:59.050 "compare": false, 00:11:59.050 "compare_and_write": false, 00:11:59.050 "abort": false, 00:11:59.050 "seek_hole": false, 00:11:59.050 "seek_data": false, 00:11:59.050 "copy": false, 00:11:59.050 "nvme_iov_md": false 00:11:59.050 }, 00:11:59.050 "memory_domains": [ 00:11:59.050 { 00:11:59.050 "dma_device_id": "system", 00:11:59.050 "dma_device_type": 1 
00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.050 "dma_device_type": 2 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "dma_device_id": "system", 00:11:59.050 "dma_device_type": 1 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.050 "dma_device_type": 2 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "dma_device_id": "system", 00:11:59.050 "dma_device_type": 1 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.050 "dma_device_type": 2 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "dma_device_id": "system", 00:11:59.050 "dma_device_type": 1 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.050 "dma_device_type": 2 00:11:59.050 } 00:11:59.050 ], 00:11:59.050 "driver_specific": { 00:11:59.050 "raid": { 00:11:59.050 "uuid": "a45dc0c3-a058-4d9f-987d-402c1088dcef", 00:11:59.050 "strip_size_kb": 64, 00:11:59.050 "state": "online", 00:11:59.050 "raid_level": "raid0", 00:11:59.050 "superblock": false, 00:11:59.050 "num_base_bdevs": 4, 00:11:59.050 "num_base_bdevs_discovered": 4, 00:11:59.050 "num_base_bdevs_operational": 4, 00:11:59.050 "base_bdevs_list": [ 00:11:59.050 { 00:11:59.050 "name": "NewBaseBdev", 00:11:59.050 "uuid": "007d2ef3-db2f-4132-bdc5-9b24072f8f6e", 00:11:59.050 "is_configured": true, 00:11:59.050 "data_offset": 0, 00:11:59.050 "data_size": 65536 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "name": "BaseBdev2", 00:11:59.050 "uuid": "2acc747d-70c4-41a6-bc7d-99e50bbb6578", 00:11:59.050 "is_configured": true, 00:11:59.050 "data_offset": 0, 00:11:59.050 "data_size": 65536 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "name": "BaseBdev3", 00:11:59.050 "uuid": "5a8a65f1-799c-42df-bae2-ad953f2d7b87", 00:11:59.050 "is_configured": true, 00:11:59.050 "data_offset": 0, 00:11:59.050 "data_size": 65536 00:11:59.050 }, 00:11:59.050 { 00:11:59.050 "name": "BaseBdev4", 00:11:59.050 "uuid": 
"bdd754fb-c64b-4aa5-ab78-84000a4366c3", 00:11:59.051 "is_configured": true, 00:11:59.051 "data_offset": 0, 00:11:59.051 "data_size": 65536 00:11:59.051 } 00:11:59.051 ] 00:11:59.051 } 00:11:59.051 } 00:11:59.051 }' 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:59.051 BaseBdev2 00:11:59.051 BaseBdev3 00:11:59.051 BaseBdev4' 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.051 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.309 09:45:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.309 [2024-10-11 09:45:43.839063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.309 [2024-10-11 09:45:43.839100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.309 [2024-10-11 09:45:43.839193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.309 [2024-10-11 09:45:43.839273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.309 [2024-10-11 09:45:43.839285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69845 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 69845 ']' 00:11:59.309 
09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69845 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69845 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:59.309 killing process with pid 69845 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69845' 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69845 00:11:59.309 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69845 00:11:59.309 [2024-10-11 09:45:43.875964] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.947 [2024-10-11 09:45:44.332639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:01.325 00:12:01.325 real 0m12.090s 00:12:01.325 user 0m19.121s 00:12:01.325 sys 0m1.890s 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.325 ************************************ 00:12:01.325 END TEST raid_state_function_test 00:12:01.325 ************************************ 00:12:01.325 09:45:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:12:01.325 09:45:45 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:01.325 09:45:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.325 09:45:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.325 ************************************ 00:12:01.325 START TEST raid_state_function_test_sb 00:12:01.325 ************************************ 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:01.325 Process raid pid: 70530 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70530 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
70530' 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70530 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70530 ']' 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.325 09:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.326 09:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:01.326 09:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.326 09:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.326 [2024-10-11 09:45:45.773259] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:01.326 [2024-10-11 09:45:45.773483] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.326 [2024-10-11 09:45:45.938051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.584 [2024-10-11 09:45:46.088388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.843 [2024-10-11 09:45:46.347760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.843 [2024-10-11 09:45:46.347814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.101 [2024-10-11 09:45:46.668073] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.101 [2024-10-11 09:45:46.668132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.101 [2024-10-11 09:45:46.668145] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.101 [2024-10-11 09:45:46.668157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.101 [2024-10-11 09:45:46.668170] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:02.101 [2024-10-11 09:45:46.668181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.101 [2024-10-11 09:45:46.668189] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.101 [2024-10-11 09:45:46.668199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.101 09:45:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.101 "name": "Existed_Raid", 00:12:02.101 "uuid": "aa22b661-6e71-4e4c-bd02-415baa3dcb0b", 00:12:02.101 "strip_size_kb": 64, 00:12:02.101 "state": "configuring", 00:12:02.101 "raid_level": "raid0", 00:12:02.101 "superblock": true, 00:12:02.101 "num_base_bdevs": 4, 00:12:02.101 "num_base_bdevs_discovered": 0, 00:12:02.101 "num_base_bdevs_operational": 4, 00:12:02.101 "base_bdevs_list": [ 00:12:02.101 { 00:12:02.101 "name": "BaseBdev1", 00:12:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.101 "is_configured": false, 00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 0 00:12:02.101 }, 00:12:02.101 { 00:12:02.101 "name": "BaseBdev2", 00:12:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.101 "is_configured": false, 00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 0 00:12:02.101 }, 00:12:02.101 { 00:12:02.101 "name": "BaseBdev3", 00:12:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.101 "is_configured": false, 00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 0 00:12:02.101 }, 00:12:02.101 { 00:12:02.101 "name": "BaseBdev4", 00:12:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.101 "is_configured": false, 00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 0 00:12:02.101 } 00:12:02.101 ] 00:12:02.101 }' 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.101 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 09:45:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 [2024-10-11 09:45:47.135294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.667 [2024-10-11 09:45:47.135349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 [2024-10-11 09:45:47.143304] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.667 [2024-10-11 09:45:47.143357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.667 [2024-10-11 09:45:47.143368] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.667 [2024-10-11 09:45:47.143379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.667 [2024-10-11 09:45:47.143387] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.667 [2024-10-11 09:45:47.143398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.667 [2024-10-11 09:45:47.143406] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:02.667 [2024-10-11 09:45:47.143416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 [2024-10-11 09:45:47.197433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.667 BaseBdev1 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 [ 00:12:02.667 { 00:12:02.667 "name": "BaseBdev1", 00:12:02.667 "aliases": [ 00:12:02.667 "7460b389-8a5a-43e7-a92b-8c9d03849bce" 00:12:02.667 ], 00:12:02.667 "product_name": "Malloc disk", 00:12:02.667 "block_size": 512, 00:12:02.667 "num_blocks": 65536, 00:12:02.667 "uuid": "7460b389-8a5a-43e7-a92b-8c9d03849bce", 00:12:02.667 "assigned_rate_limits": { 00:12:02.667 "rw_ios_per_sec": 0, 00:12:02.667 "rw_mbytes_per_sec": 0, 00:12:02.667 "r_mbytes_per_sec": 0, 00:12:02.667 "w_mbytes_per_sec": 0 00:12:02.667 }, 00:12:02.667 "claimed": true, 00:12:02.667 "claim_type": "exclusive_write", 00:12:02.667 "zoned": false, 00:12:02.667 "supported_io_types": { 00:12:02.667 "read": true, 00:12:02.667 "write": true, 00:12:02.667 "unmap": true, 00:12:02.667 "flush": true, 00:12:02.667 "reset": true, 00:12:02.667 "nvme_admin": false, 00:12:02.667 "nvme_io": false, 00:12:02.667 "nvme_io_md": false, 00:12:02.667 "write_zeroes": true, 00:12:02.667 "zcopy": true, 00:12:02.667 "get_zone_info": false, 00:12:02.667 "zone_management": false, 00:12:02.667 "zone_append": false, 00:12:02.667 "compare": false, 00:12:02.667 "compare_and_write": false, 00:12:02.667 "abort": true, 00:12:02.667 "seek_hole": false, 00:12:02.667 "seek_data": false, 00:12:02.667 "copy": true, 00:12:02.667 "nvme_iov_md": false 00:12:02.667 }, 00:12:02.667 "memory_domains": [ 00:12:02.667 { 00:12:02.667 "dma_device_id": "system", 00:12:02.667 "dma_device_type": 1 00:12:02.667 }, 00:12:02.667 { 00:12:02.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.667 "dma_device_type": 2 00:12:02.667 } 
00:12:02.667 ], 00:12:02.667 "driver_specific": {} 00:12:02.667 } 00:12:02.667 ] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 09:45:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.667 "name": "Existed_Raid", 00:12:02.667 "uuid": "cbfc7b28-71b3-47e2-b8a8-f8286a692719", 00:12:02.667 "strip_size_kb": 64, 00:12:02.667 "state": "configuring", 00:12:02.667 "raid_level": "raid0", 00:12:02.667 "superblock": true, 00:12:02.667 "num_base_bdevs": 4, 00:12:02.667 "num_base_bdevs_discovered": 1, 00:12:02.667 "num_base_bdevs_operational": 4, 00:12:02.667 "base_bdevs_list": [ 00:12:02.667 { 00:12:02.667 "name": "BaseBdev1", 00:12:02.667 "uuid": "7460b389-8a5a-43e7-a92b-8c9d03849bce", 00:12:02.667 "is_configured": true, 00:12:02.667 "data_offset": 2048, 00:12:02.667 "data_size": 63488 00:12:02.667 }, 00:12:02.667 { 00:12:02.667 "name": "BaseBdev2", 00:12:02.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.667 "is_configured": false, 00:12:02.667 "data_offset": 0, 00:12:02.667 "data_size": 0 00:12:02.667 }, 00:12:02.667 { 00:12:02.667 "name": "BaseBdev3", 00:12:02.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.667 "is_configured": false, 00:12:02.667 "data_offset": 0, 00:12:02.667 "data_size": 0 00:12:02.667 }, 00:12:02.667 { 00:12:02.667 "name": "BaseBdev4", 00:12:02.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.667 "is_configured": false, 00:12:02.667 "data_offset": 0, 00:12:02.667 "data_size": 0 00:12:02.667 } 00:12:02.667 ] 00:12:02.667 }' 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.667 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.236 09:45:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 [2024-10-11 09:45:47.688757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.236 [2024-10-11 09:45:47.688951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 [2024-10-11 09:45:47.700852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.236 [2024-10-11 09:45:47.703047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.236 [2024-10-11 09:45:47.703108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.236 [2024-10-11 09:45:47.703121] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.236 [2024-10-11 09:45:47.703134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.236 [2024-10-11 09:45:47.703142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:03.236 [2024-10-11 09:45:47.703153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.236 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:03.236 "name": "Existed_Raid", 00:12:03.236 "uuid": "dbece90b-c6a4-4210-96b9-d829d264edd2", 00:12:03.236 "strip_size_kb": 64, 00:12:03.236 "state": "configuring", 00:12:03.236 "raid_level": "raid0", 00:12:03.236 "superblock": true, 00:12:03.236 "num_base_bdevs": 4, 00:12:03.236 "num_base_bdevs_discovered": 1, 00:12:03.236 "num_base_bdevs_operational": 4, 00:12:03.236 "base_bdevs_list": [ 00:12:03.236 { 00:12:03.236 "name": "BaseBdev1", 00:12:03.236 "uuid": "7460b389-8a5a-43e7-a92b-8c9d03849bce", 00:12:03.236 "is_configured": true, 00:12:03.236 "data_offset": 2048, 00:12:03.236 "data_size": 63488 00:12:03.236 }, 00:12:03.236 { 00:12:03.236 "name": "BaseBdev2", 00:12:03.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.236 "is_configured": false, 00:12:03.236 "data_offset": 0, 00:12:03.236 "data_size": 0 00:12:03.236 }, 00:12:03.236 { 00:12:03.236 "name": "BaseBdev3", 00:12:03.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.236 "is_configured": false, 00:12:03.236 "data_offset": 0, 00:12:03.236 "data_size": 0 00:12:03.236 }, 00:12:03.236 { 00:12:03.236 "name": "BaseBdev4", 00:12:03.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.237 "is_configured": false, 00:12:03.237 "data_offset": 0, 00:12:03.237 "data_size": 0 00:12:03.237 } 00:12:03.237 ] 00:12:03.237 }' 00:12:03.237 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.237 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.821 [2024-10-11 09:45:48.204363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:03.821 BaseBdev2 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:03.821 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.822 [ 00:12:03.822 { 00:12:03.822 "name": "BaseBdev2", 00:12:03.822 "aliases": [ 00:12:03.822 "fcd9ece0-6de8-4420-a746-7de37a7fd596" 00:12:03.822 ], 00:12:03.822 "product_name": "Malloc disk", 00:12:03.822 "block_size": 512, 00:12:03.822 "num_blocks": 65536, 00:12:03.822 "uuid": "fcd9ece0-6de8-4420-a746-7de37a7fd596", 
00:12:03.822 "assigned_rate_limits": { 00:12:03.822 "rw_ios_per_sec": 0, 00:12:03.822 "rw_mbytes_per_sec": 0, 00:12:03.822 "r_mbytes_per_sec": 0, 00:12:03.822 "w_mbytes_per_sec": 0 00:12:03.822 }, 00:12:03.822 "claimed": true, 00:12:03.822 "claim_type": "exclusive_write", 00:12:03.822 "zoned": false, 00:12:03.822 "supported_io_types": { 00:12:03.822 "read": true, 00:12:03.822 "write": true, 00:12:03.822 "unmap": true, 00:12:03.822 "flush": true, 00:12:03.822 "reset": true, 00:12:03.822 "nvme_admin": false, 00:12:03.822 "nvme_io": false, 00:12:03.822 "nvme_io_md": false, 00:12:03.822 "write_zeroes": true, 00:12:03.822 "zcopy": true, 00:12:03.822 "get_zone_info": false, 00:12:03.822 "zone_management": false, 00:12:03.822 "zone_append": false, 00:12:03.822 "compare": false, 00:12:03.822 "compare_and_write": false, 00:12:03.822 "abort": true, 00:12:03.822 "seek_hole": false, 00:12:03.822 "seek_data": false, 00:12:03.822 "copy": true, 00:12:03.822 "nvme_iov_md": false 00:12:03.822 }, 00:12:03.822 "memory_domains": [ 00:12:03.822 { 00:12:03.822 "dma_device_id": "system", 00:12:03.822 "dma_device_type": 1 00:12:03.822 }, 00:12:03.822 { 00:12:03.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.822 "dma_device_type": 2 00:12:03.822 } 00:12:03.822 ], 00:12:03.822 "driver_specific": {} 00:12:03.822 } 00:12:03.822 ] 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.822 "name": "Existed_Raid", 00:12:03.822 "uuid": "dbece90b-c6a4-4210-96b9-d829d264edd2", 00:12:03.822 "strip_size_kb": 64, 00:12:03.822 "state": "configuring", 00:12:03.822 "raid_level": "raid0", 00:12:03.822 "superblock": true, 00:12:03.822 "num_base_bdevs": 4, 00:12:03.822 "num_base_bdevs_discovered": 2, 00:12:03.822 
"num_base_bdevs_operational": 4, 00:12:03.822 "base_bdevs_list": [ 00:12:03.822 { 00:12:03.822 "name": "BaseBdev1", 00:12:03.822 "uuid": "7460b389-8a5a-43e7-a92b-8c9d03849bce", 00:12:03.822 "is_configured": true, 00:12:03.822 "data_offset": 2048, 00:12:03.822 "data_size": 63488 00:12:03.822 }, 00:12:03.822 { 00:12:03.822 "name": "BaseBdev2", 00:12:03.822 "uuid": "fcd9ece0-6de8-4420-a746-7de37a7fd596", 00:12:03.822 "is_configured": true, 00:12:03.822 "data_offset": 2048, 00:12:03.822 "data_size": 63488 00:12:03.822 }, 00:12:03.822 { 00:12:03.822 "name": "BaseBdev3", 00:12:03.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.822 "is_configured": false, 00:12:03.822 "data_offset": 0, 00:12:03.822 "data_size": 0 00:12:03.822 }, 00:12:03.822 { 00:12:03.822 "name": "BaseBdev4", 00:12:03.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.822 "is_configured": false, 00:12:03.822 "data_offset": 0, 00:12:03.822 "data_size": 0 00:12:03.822 } 00:12:03.822 ] 00:12:03.822 }' 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.822 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.081 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.081 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.081 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 [2024-10-11 09:45:48.772134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.341 BaseBdev3 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 [ 00:12:04.341 { 00:12:04.341 "name": "BaseBdev3", 00:12:04.341 "aliases": [ 00:12:04.341 "67a09dd1-2e6e-474c-a2d8-bde5ac3536e4" 00:12:04.341 ], 00:12:04.341 "product_name": "Malloc disk", 00:12:04.341 "block_size": 512, 00:12:04.341 "num_blocks": 65536, 00:12:04.341 "uuid": "67a09dd1-2e6e-474c-a2d8-bde5ac3536e4", 00:12:04.341 "assigned_rate_limits": { 00:12:04.341 "rw_ios_per_sec": 0, 00:12:04.341 "rw_mbytes_per_sec": 0, 00:12:04.341 "r_mbytes_per_sec": 0, 00:12:04.341 "w_mbytes_per_sec": 0 00:12:04.341 }, 00:12:04.341 "claimed": true, 00:12:04.341 "claim_type": "exclusive_write", 00:12:04.341 "zoned": false, 00:12:04.341 "supported_io_types": { 
00:12:04.341 "read": true, 00:12:04.341 "write": true, 00:12:04.341 "unmap": true, 00:12:04.341 "flush": true, 00:12:04.341 "reset": true, 00:12:04.341 "nvme_admin": false, 00:12:04.341 "nvme_io": false, 00:12:04.341 "nvme_io_md": false, 00:12:04.341 "write_zeroes": true, 00:12:04.341 "zcopy": true, 00:12:04.341 "get_zone_info": false, 00:12:04.341 "zone_management": false, 00:12:04.341 "zone_append": false, 00:12:04.341 "compare": false, 00:12:04.341 "compare_and_write": false, 00:12:04.341 "abort": true, 00:12:04.341 "seek_hole": false, 00:12:04.341 "seek_data": false, 00:12:04.341 "copy": true, 00:12:04.341 "nvme_iov_md": false 00:12:04.341 }, 00:12:04.341 "memory_domains": [ 00:12:04.341 { 00:12:04.341 "dma_device_id": "system", 00:12:04.341 "dma_device_type": 1 00:12:04.341 }, 00:12:04.341 { 00:12:04.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.341 "dma_device_type": 2 00:12:04.341 } 00:12:04.341 ], 00:12:04.341 "driver_specific": {} 00:12:04.341 } 00:12:04.341 ] 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.341 "name": "Existed_Raid", 00:12:04.341 "uuid": "dbece90b-c6a4-4210-96b9-d829d264edd2", 00:12:04.341 "strip_size_kb": 64, 00:12:04.341 "state": "configuring", 00:12:04.341 "raid_level": "raid0", 00:12:04.341 "superblock": true, 00:12:04.341 "num_base_bdevs": 4, 00:12:04.341 "num_base_bdevs_discovered": 3, 00:12:04.341 "num_base_bdevs_operational": 4, 00:12:04.341 "base_bdevs_list": [ 00:12:04.341 { 00:12:04.341 "name": "BaseBdev1", 00:12:04.341 "uuid": "7460b389-8a5a-43e7-a92b-8c9d03849bce", 00:12:04.341 "is_configured": true, 00:12:04.341 "data_offset": 2048, 00:12:04.341 "data_size": 63488 00:12:04.341 }, 00:12:04.341 { 00:12:04.341 "name": "BaseBdev2", 00:12:04.341 
"uuid": "fcd9ece0-6de8-4420-a746-7de37a7fd596", 00:12:04.341 "is_configured": true, 00:12:04.341 "data_offset": 2048, 00:12:04.341 "data_size": 63488 00:12:04.341 }, 00:12:04.341 { 00:12:04.341 "name": "BaseBdev3", 00:12:04.341 "uuid": "67a09dd1-2e6e-474c-a2d8-bde5ac3536e4", 00:12:04.341 "is_configured": true, 00:12:04.341 "data_offset": 2048, 00:12:04.341 "data_size": 63488 00:12:04.341 }, 00:12:04.341 { 00:12:04.341 "name": "BaseBdev4", 00:12:04.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.341 "is_configured": false, 00:12:04.341 "data_offset": 0, 00:12:04.341 "data_size": 0 00:12:04.341 } 00:12:04.341 ] 00:12:04.341 }' 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.341 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.910 [2024-10-11 09:45:49.369411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.910 [2024-10-11 09:45:49.369792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:04.910 [2024-10-11 09:45:49.369811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:04.910 BaseBdev4 00:12:04.910 [2024-10-11 09:45:49.370123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.910 [2024-10-11 09:45:49.370299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:04.910 [2024-10-11 09:45:49.370315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:04.910 [2024-10-11 09:45:49.370471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.910 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.910 [ 00:12:04.910 { 00:12:04.910 "name": "BaseBdev4", 00:12:04.910 "aliases": [ 00:12:04.910 "4a079662-cdeb-4199-b6ff-b222eb259504" 00:12:04.910 ], 00:12:04.910 "product_name": "Malloc disk", 00:12:04.910 "block_size": 512, 00:12:04.910 
"num_blocks": 65536, 00:12:04.910 "uuid": "4a079662-cdeb-4199-b6ff-b222eb259504", 00:12:04.910 "assigned_rate_limits": { 00:12:04.910 "rw_ios_per_sec": 0, 00:12:04.910 "rw_mbytes_per_sec": 0, 00:12:04.910 "r_mbytes_per_sec": 0, 00:12:04.910 "w_mbytes_per_sec": 0 00:12:04.910 }, 00:12:04.910 "claimed": true, 00:12:04.910 "claim_type": "exclusive_write", 00:12:04.910 "zoned": false, 00:12:04.910 "supported_io_types": { 00:12:04.911 "read": true, 00:12:04.911 "write": true, 00:12:04.911 "unmap": true, 00:12:04.911 "flush": true, 00:12:04.911 "reset": true, 00:12:04.911 "nvme_admin": false, 00:12:04.911 "nvme_io": false, 00:12:04.911 "nvme_io_md": false, 00:12:04.911 "write_zeroes": true, 00:12:04.911 "zcopy": true, 00:12:04.911 "get_zone_info": false, 00:12:04.911 "zone_management": false, 00:12:04.911 "zone_append": false, 00:12:04.911 "compare": false, 00:12:04.911 "compare_and_write": false, 00:12:04.911 "abort": true, 00:12:04.911 "seek_hole": false, 00:12:04.911 "seek_data": false, 00:12:04.911 "copy": true, 00:12:04.911 "nvme_iov_md": false 00:12:04.911 }, 00:12:04.911 "memory_domains": [ 00:12:04.911 { 00:12:04.911 "dma_device_id": "system", 00:12:04.911 "dma_device_type": 1 00:12:04.911 }, 00:12:04.911 { 00:12:04.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.911 "dma_device_type": 2 00:12:04.911 } 00:12:04.911 ], 00:12:04.911 "driver_specific": {} 00:12:04.911 } 00:12:04.911 ] 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.911 "name": "Existed_Raid", 00:12:04.911 "uuid": "dbece90b-c6a4-4210-96b9-d829d264edd2", 00:12:04.911 "strip_size_kb": 64, 00:12:04.911 "state": "online", 00:12:04.911 "raid_level": "raid0", 00:12:04.911 "superblock": true, 00:12:04.911 "num_base_bdevs": 4, 
00:12:04.911 "num_base_bdevs_discovered": 4, 00:12:04.911 "num_base_bdevs_operational": 4, 00:12:04.911 "base_bdevs_list": [ 00:12:04.911 { 00:12:04.911 "name": "BaseBdev1", 00:12:04.911 "uuid": "7460b389-8a5a-43e7-a92b-8c9d03849bce", 00:12:04.911 "is_configured": true, 00:12:04.911 "data_offset": 2048, 00:12:04.911 "data_size": 63488 00:12:04.911 }, 00:12:04.911 { 00:12:04.911 "name": "BaseBdev2", 00:12:04.911 "uuid": "fcd9ece0-6de8-4420-a746-7de37a7fd596", 00:12:04.911 "is_configured": true, 00:12:04.911 "data_offset": 2048, 00:12:04.911 "data_size": 63488 00:12:04.911 }, 00:12:04.911 { 00:12:04.911 "name": "BaseBdev3", 00:12:04.911 "uuid": "67a09dd1-2e6e-474c-a2d8-bde5ac3536e4", 00:12:04.911 "is_configured": true, 00:12:04.911 "data_offset": 2048, 00:12:04.911 "data_size": 63488 00:12:04.911 }, 00:12:04.911 { 00:12:04.911 "name": "BaseBdev4", 00:12:04.911 "uuid": "4a079662-cdeb-4199-b6ff-b222eb259504", 00:12:04.911 "is_configured": true, 00:12:04.911 "data_offset": 2048, 00:12:04.911 "data_size": 63488 00:12:04.911 } 00:12:04.911 ] 00:12:04.911 }' 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.911 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.479 
09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.479 [2024-10-11 09:45:49.841161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.479 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.479 "name": "Existed_Raid", 00:12:05.479 "aliases": [ 00:12:05.479 "dbece90b-c6a4-4210-96b9-d829d264edd2" 00:12:05.479 ], 00:12:05.479 "product_name": "Raid Volume", 00:12:05.479 "block_size": 512, 00:12:05.479 "num_blocks": 253952, 00:12:05.479 "uuid": "dbece90b-c6a4-4210-96b9-d829d264edd2", 00:12:05.479 "assigned_rate_limits": { 00:12:05.479 "rw_ios_per_sec": 0, 00:12:05.479 "rw_mbytes_per_sec": 0, 00:12:05.479 "r_mbytes_per_sec": 0, 00:12:05.479 "w_mbytes_per_sec": 0 00:12:05.479 }, 00:12:05.479 "claimed": false, 00:12:05.479 "zoned": false, 00:12:05.479 "supported_io_types": { 00:12:05.479 "read": true, 00:12:05.479 "write": true, 00:12:05.479 "unmap": true, 00:12:05.479 "flush": true, 00:12:05.479 "reset": true, 00:12:05.479 "nvme_admin": false, 00:12:05.479 "nvme_io": false, 00:12:05.479 "nvme_io_md": false, 00:12:05.479 "write_zeroes": true, 00:12:05.479 "zcopy": false, 00:12:05.479 "get_zone_info": false, 00:12:05.479 "zone_management": false, 00:12:05.479 "zone_append": false, 00:12:05.479 "compare": false, 00:12:05.479 "compare_and_write": false, 00:12:05.479 "abort": false, 00:12:05.479 "seek_hole": false, 00:12:05.479 "seek_data": false, 00:12:05.479 "copy": false, 00:12:05.479 
"nvme_iov_md": false 00:12:05.479 }, 00:12:05.479 "memory_domains": [ 00:12:05.479 { 00:12:05.479 "dma_device_id": "system", 00:12:05.479 "dma_device_type": 1 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.479 "dma_device_type": 2 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "dma_device_id": "system", 00:12:05.479 "dma_device_type": 1 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.479 "dma_device_type": 2 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "dma_device_id": "system", 00:12:05.479 "dma_device_type": 1 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.479 "dma_device_type": 2 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "dma_device_id": "system", 00:12:05.479 "dma_device_type": 1 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.479 "dma_device_type": 2 00:12:05.479 } 00:12:05.479 ], 00:12:05.479 "driver_specific": { 00:12:05.479 "raid": { 00:12:05.479 "uuid": "dbece90b-c6a4-4210-96b9-d829d264edd2", 00:12:05.479 "strip_size_kb": 64, 00:12:05.479 "state": "online", 00:12:05.479 "raid_level": "raid0", 00:12:05.479 "superblock": true, 00:12:05.479 "num_base_bdevs": 4, 00:12:05.479 "num_base_bdevs_discovered": 4, 00:12:05.479 "num_base_bdevs_operational": 4, 00:12:05.479 "base_bdevs_list": [ 00:12:05.479 { 00:12:05.479 "name": "BaseBdev1", 00:12:05.479 "uuid": "7460b389-8a5a-43e7-a92b-8c9d03849bce", 00:12:05.479 "is_configured": true, 00:12:05.479 "data_offset": 2048, 00:12:05.479 "data_size": 63488 00:12:05.479 }, 00:12:05.479 { 00:12:05.479 "name": "BaseBdev2", 00:12:05.479 "uuid": "fcd9ece0-6de8-4420-a746-7de37a7fd596", 00:12:05.479 "is_configured": true, 00:12:05.479 "data_offset": 2048, 00:12:05.479 "data_size": 63488 00:12:05.479 }, 00:12:05.480 { 00:12:05.480 "name": "BaseBdev3", 00:12:05.480 "uuid": "67a09dd1-2e6e-474c-a2d8-bde5ac3536e4", 00:12:05.480 "is_configured": true, 
00:12:05.480 "data_offset": 2048, 00:12:05.480 "data_size": 63488 00:12:05.480 }, 00:12:05.480 { 00:12:05.480 "name": "BaseBdev4", 00:12:05.480 "uuid": "4a079662-cdeb-4199-b6ff-b222eb259504", 00:12:05.480 "is_configured": true, 00:12:05.480 "data_offset": 2048, 00:12:05.480 "data_size": 63488 00:12:05.480 } 00:12:05.480 ] 00:12:05.480 } 00:12:05.480 } 00:12:05.480 }' 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.480 BaseBdev2 00:12:05.480 BaseBdev3 00:12:05.480 BaseBdev4' 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.480 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.480 09:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.480 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.740 [2024-10-11 09:45:50.164268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.740 [2024-10-11 09:45:50.164310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.740 [2024-10-11 09:45:50.164370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.740 "name": "Existed_Raid", 00:12:05.740 "uuid": "dbece90b-c6a4-4210-96b9-d829d264edd2", 00:12:05.740 "strip_size_kb": 64, 00:12:05.740 "state": "offline", 00:12:05.740 "raid_level": "raid0", 00:12:05.740 "superblock": true, 00:12:05.740 "num_base_bdevs": 4, 00:12:05.740 "num_base_bdevs_discovered": 3, 00:12:05.740 "num_base_bdevs_operational": 3, 00:12:05.740 "base_bdevs_list": [ 00:12:05.740 { 00:12:05.740 "name": null, 00:12:05.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.740 "is_configured": false, 00:12:05.740 "data_offset": 0, 00:12:05.740 "data_size": 63488 00:12:05.740 }, 00:12:05.740 { 00:12:05.740 "name": "BaseBdev2", 00:12:05.740 "uuid": "fcd9ece0-6de8-4420-a746-7de37a7fd596", 00:12:05.740 "is_configured": true, 00:12:05.740 "data_offset": 2048, 00:12:05.740 "data_size": 63488 00:12:05.740 }, 00:12:05.740 { 00:12:05.740 "name": "BaseBdev3", 00:12:05.740 "uuid": "67a09dd1-2e6e-474c-a2d8-bde5ac3536e4", 00:12:05.740 "is_configured": true, 00:12:05.740 "data_offset": 2048, 00:12:05.740 "data_size": 63488 00:12:05.740 }, 00:12:05.740 { 00:12:05.740 "name": "BaseBdev4", 00:12:05.740 "uuid": "4a079662-cdeb-4199-b6ff-b222eb259504", 00:12:05.740 "is_configured": true, 00:12:05.740 "data_offset": 2048, 00:12:05.740 "data_size": 63488 00:12:05.740 } 00:12:05.740 ] 00:12:05.740 }' 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.740 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.309 
09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.309 [2024-10-11 09:45:50.824360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.309 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.569 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:06.569 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.569 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.569 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.569 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.569 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.569 [2024-10-11 09:45:50.987023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:06.569 09:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.569 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.569 [2024-10-11 09:45:51.150833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:06.569 [2024-10-11 09:45:51.150983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:06.828 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.829 BaseBdev2 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.829 [ 00:12:06.829 { 00:12:06.829 "name": "BaseBdev2", 00:12:06.829 "aliases": [ 00:12:06.829 
"6e068c48-b685-4dfb-97f6-a2dcdc768f6e" 00:12:06.829 ], 00:12:06.829 "product_name": "Malloc disk", 00:12:06.829 "block_size": 512, 00:12:06.829 "num_blocks": 65536, 00:12:06.829 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:06.829 "assigned_rate_limits": { 00:12:06.829 "rw_ios_per_sec": 0, 00:12:06.829 "rw_mbytes_per_sec": 0, 00:12:06.829 "r_mbytes_per_sec": 0, 00:12:06.829 "w_mbytes_per_sec": 0 00:12:06.829 }, 00:12:06.829 "claimed": false, 00:12:06.829 "zoned": false, 00:12:06.829 "supported_io_types": { 00:12:06.829 "read": true, 00:12:06.829 "write": true, 00:12:06.829 "unmap": true, 00:12:06.829 "flush": true, 00:12:06.829 "reset": true, 00:12:06.829 "nvme_admin": false, 00:12:06.829 "nvme_io": false, 00:12:06.829 "nvme_io_md": false, 00:12:06.829 "write_zeroes": true, 00:12:06.829 "zcopy": true, 00:12:06.829 "get_zone_info": false, 00:12:06.829 "zone_management": false, 00:12:06.829 "zone_append": false, 00:12:06.829 "compare": false, 00:12:06.829 "compare_and_write": false, 00:12:06.829 "abort": true, 00:12:06.829 "seek_hole": false, 00:12:06.829 "seek_data": false, 00:12:06.829 "copy": true, 00:12:06.829 "nvme_iov_md": false 00:12:06.829 }, 00:12:06.829 "memory_domains": [ 00:12:06.829 { 00:12:06.829 "dma_device_id": "system", 00:12:06.829 "dma_device_type": 1 00:12:06.829 }, 00:12:06.829 { 00:12:06.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.829 "dma_device_type": 2 00:12:06.829 } 00:12:06.829 ], 00:12:06.829 "driver_specific": {} 00:12:06.829 } 00:12:06.829 ] 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.829 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.829 BaseBdev3 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.829 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.114 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.114 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.114 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.114 [ 00:12:07.114 { 
00:12:07.114 "name": "BaseBdev3", 00:12:07.114 "aliases": [ 00:12:07.114 "8e2ac4e8-24dc-413e-9340-8179a745e785" 00:12:07.114 ], 00:12:07.114 "product_name": "Malloc disk", 00:12:07.114 "block_size": 512, 00:12:07.114 "num_blocks": 65536, 00:12:07.114 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:07.114 "assigned_rate_limits": { 00:12:07.114 "rw_ios_per_sec": 0, 00:12:07.114 "rw_mbytes_per_sec": 0, 00:12:07.114 "r_mbytes_per_sec": 0, 00:12:07.114 "w_mbytes_per_sec": 0 00:12:07.114 }, 00:12:07.114 "claimed": false, 00:12:07.114 "zoned": false, 00:12:07.114 "supported_io_types": { 00:12:07.114 "read": true, 00:12:07.114 "write": true, 00:12:07.114 "unmap": true, 00:12:07.114 "flush": true, 00:12:07.114 "reset": true, 00:12:07.114 "nvme_admin": false, 00:12:07.114 "nvme_io": false, 00:12:07.114 "nvme_io_md": false, 00:12:07.114 "write_zeroes": true, 00:12:07.114 "zcopy": true, 00:12:07.114 "get_zone_info": false, 00:12:07.114 "zone_management": false, 00:12:07.114 "zone_append": false, 00:12:07.114 "compare": false, 00:12:07.114 "compare_and_write": false, 00:12:07.114 "abort": true, 00:12:07.114 "seek_hole": false, 00:12:07.114 "seek_data": false, 00:12:07.114 "copy": true, 00:12:07.114 "nvme_iov_md": false 00:12:07.114 }, 00:12:07.114 "memory_domains": [ 00:12:07.114 { 00:12:07.114 "dma_device_id": "system", 00:12:07.114 "dma_device_type": 1 00:12:07.114 }, 00:12:07.114 { 00:12:07.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.114 "dma_device_type": 2 00:12:07.114 } 00:12:07.114 ], 00:12:07.114 "driver_specific": {} 00:12:07.114 } 00:12:07.114 ] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.115 BaseBdev4 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:07.115 [ 00:12:07.115 { 00:12:07.115 "name": "BaseBdev4", 00:12:07.115 "aliases": [ 00:12:07.115 "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184" 00:12:07.115 ], 00:12:07.115 "product_name": "Malloc disk", 00:12:07.115 "block_size": 512, 00:12:07.115 "num_blocks": 65536, 00:12:07.115 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:07.115 "assigned_rate_limits": { 00:12:07.115 "rw_ios_per_sec": 0, 00:12:07.115 "rw_mbytes_per_sec": 0, 00:12:07.115 "r_mbytes_per_sec": 0, 00:12:07.115 "w_mbytes_per_sec": 0 00:12:07.115 }, 00:12:07.115 "claimed": false, 00:12:07.115 "zoned": false, 00:12:07.115 "supported_io_types": { 00:12:07.115 "read": true, 00:12:07.115 "write": true, 00:12:07.115 "unmap": true, 00:12:07.115 "flush": true, 00:12:07.115 "reset": true, 00:12:07.115 "nvme_admin": false, 00:12:07.115 "nvme_io": false, 00:12:07.115 "nvme_io_md": false, 00:12:07.115 "write_zeroes": true, 00:12:07.115 "zcopy": true, 00:12:07.115 "get_zone_info": false, 00:12:07.115 "zone_management": false, 00:12:07.115 "zone_append": false, 00:12:07.115 "compare": false, 00:12:07.115 "compare_and_write": false, 00:12:07.115 "abort": true, 00:12:07.115 "seek_hole": false, 00:12:07.115 "seek_data": false, 00:12:07.115 "copy": true, 00:12:07.115 "nvme_iov_md": false 00:12:07.115 }, 00:12:07.115 "memory_domains": [ 00:12:07.115 { 00:12:07.115 "dma_device_id": "system", 00:12:07.115 "dma_device_type": 1 00:12:07.115 }, 00:12:07.115 { 00:12:07.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.115 "dma_device_type": 2 00:12:07.115 } 00:12:07.115 ], 00:12:07.115 "driver_specific": {} 00:12:07.115 } 00:12:07.115 ] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.115 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.115 [2024-10-11 09:45:51.584951] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.115 [2024-10-11 09:45:51.585100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.115 [2024-10-11 09:45:51.585134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.115 [2024-10-11 09:45:51.587104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.115 [2024-10-11 09:45:51.587165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.115 "name": "Existed_Raid", 00:12:07.115 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:07.115 "strip_size_kb": 64, 00:12:07.115 "state": "configuring", 00:12:07.115 "raid_level": "raid0", 00:12:07.115 "superblock": true, 00:12:07.115 "num_base_bdevs": 4, 00:12:07.115 "num_base_bdevs_discovered": 3, 00:12:07.115 "num_base_bdevs_operational": 4, 00:12:07.115 "base_bdevs_list": [ 00:12:07.115 { 00:12:07.115 "name": "BaseBdev1", 00:12:07.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.115 "is_configured": false, 00:12:07.115 "data_offset": 0, 00:12:07.115 "data_size": 0 00:12:07.115 }, 00:12:07.115 { 00:12:07.115 "name": "BaseBdev2", 00:12:07.115 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:07.115 "is_configured": true, 00:12:07.115 "data_offset": 2048, 00:12:07.115 "data_size": 63488 
00:12:07.115 }, 00:12:07.115 { 00:12:07.115 "name": "BaseBdev3", 00:12:07.115 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:07.115 "is_configured": true, 00:12:07.115 "data_offset": 2048, 00:12:07.115 "data_size": 63488 00:12:07.115 }, 00:12:07.115 { 00:12:07.115 "name": "BaseBdev4", 00:12:07.115 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:07.115 "is_configured": true, 00:12:07.115 "data_offset": 2048, 00:12:07.115 "data_size": 63488 00:12:07.115 } 00:12:07.115 ] 00:12:07.115 }' 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.115 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.692 [2024-10-11 09:45:52.060113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.692 "name": "Existed_Raid", 00:12:07.692 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:07.692 "strip_size_kb": 64, 00:12:07.692 "state": "configuring", 00:12:07.692 "raid_level": "raid0", 00:12:07.692 "superblock": true, 00:12:07.692 "num_base_bdevs": 4, 00:12:07.692 "num_base_bdevs_discovered": 2, 00:12:07.692 "num_base_bdevs_operational": 4, 00:12:07.692 "base_bdevs_list": [ 00:12:07.692 { 00:12:07.692 "name": "BaseBdev1", 00:12:07.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.692 "is_configured": false, 00:12:07.692 "data_offset": 0, 00:12:07.692 "data_size": 0 00:12:07.692 }, 00:12:07.692 { 00:12:07.692 "name": null, 00:12:07.692 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:07.692 "is_configured": false, 00:12:07.692 "data_offset": 0, 00:12:07.692 "data_size": 63488 
00:12:07.692 }, 00:12:07.692 { 00:12:07.692 "name": "BaseBdev3", 00:12:07.692 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:07.692 "is_configured": true, 00:12:07.692 "data_offset": 2048, 00:12:07.692 "data_size": 63488 00:12:07.692 }, 00:12:07.692 { 00:12:07.692 "name": "BaseBdev4", 00:12:07.692 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:07.692 "is_configured": true, 00:12:07.692 "data_offset": 2048, 00:12:07.692 "data_size": 63488 00:12:07.692 } 00:12:07.692 ] 00:12:07.692 }' 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.692 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.951 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.951 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.951 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.951 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.951 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.211 [2024-10-11 09:45:52.638592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.211 BaseBdev1 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.211 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.212 [ 00:12:08.212 { 00:12:08.212 "name": "BaseBdev1", 00:12:08.212 "aliases": [ 00:12:08.212 "ff7f107b-ab9e-425a-bb1c-e7669aacfa82" 00:12:08.212 ], 00:12:08.212 "product_name": "Malloc disk", 00:12:08.212 "block_size": 512, 00:12:08.212 "num_blocks": 65536, 00:12:08.212 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:08.212 "assigned_rate_limits": { 00:12:08.212 "rw_ios_per_sec": 0, 00:12:08.212 "rw_mbytes_per_sec": 0, 
00:12:08.212 "r_mbytes_per_sec": 0, 00:12:08.212 "w_mbytes_per_sec": 0 00:12:08.212 }, 00:12:08.212 "claimed": true, 00:12:08.212 "claim_type": "exclusive_write", 00:12:08.212 "zoned": false, 00:12:08.212 "supported_io_types": { 00:12:08.212 "read": true, 00:12:08.212 "write": true, 00:12:08.212 "unmap": true, 00:12:08.212 "flush": true, 00:12:08.212 "reset": true, 00:12:08.212 "nvme_admin": false, 00:12:08.212 "nvme_io": false, 00:12:08.212 "nvme_io_md": false, 00:12:08.212 "write_zeroes": true, 00:12:08.212 "zcopy": true, 00:12:08.212 "get_zone_info": false, 00:12:08.212 "zone_management": false, 00:12:08.212 "zone_append": false, 00:12:08.212 "compare": false, 00:12:08.212 "compare_and_write": false, 00:12:08.212 "abort": true, 00:12:08.212 "seek_hole": false, 00:12:08.212 "seek_data": false, 00:12:08.212 "copy": true, 00:12:08.212 "nvme_iov_md": false 00:12:08.212 }, 00:12:08.212 "memory_domains": [ 00:12:08.212 { 00:12:08.212 "dma_device_id": "system", 00:12:08.212 "dma_device_type": 1 00:12:08.212 }, 00:12:08.212 { 00:12:08.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.212 "dma_device_type": 2 00:12:08.212 } 00:12:08.212 ], 00:12:08.212 "driver_specific": {} 00:12:08.212 } 00:12:08.212 ] 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.212 09:45:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.212 "name": "Existed_Raid", 00:12:08.212 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:08.212 "strip_size_kb": 64, 00:12:08.212 "state": "configuring", 00:12:08.212 "raid_level": "raid0", 00:12:08.212 "superblock": true, 00:12:08.212 "num_base_bdevs": 4, 00:12:08.212 "num_base_bdevs_discovered": 3, 00:12:08.212 "num_base_bdevs_operational": 4, 00:12:08.212 "base_bdevs_list": [ 00:12:08.212 { 00:12:08.212 "name": "BaseBdev1", 00:12:08.212 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:08.212 "is_configured": true, 00:12:08.212 "data_offset": 2048, 00:12:08.212 "data_size": 63488 00:12:08.212 }, 00:12:08.212 { 
00:12:08.212 "name": null, 00:12:08.212 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:08.212 "is_configured": false, 00:12:08.212 "data_offset": 0, 00:12:08.212 "data_size": 63488 00:12:08.212 }, 00:12:08.212 { 00:12:08.212 "name": "BaseBdev3", 00:12:08.212 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:08.212 "is_configured": true, 00:12:08.212 "data_offset": 2048, 00:12:08.212 "data_size": 63488 00:12:08.212 }, 00:12:08.212 { 00:12:08.212 "name": "BaseBdev4", 00:12:08.212 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:08.212 "is_configured": true, 00:12:08.212 "data_offset": 2048, 00:12:08.212 "data_size": 63488 00:12:08.212 } 00:12:08.212 ] 00:12:08.212 }' 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.212 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 [2024-10-11 09:45:53.161824] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.780 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.781 09:45:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.781 "name": "Existed_Raid", 00:12:08.781 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:08.781 "strip_size_kb": 64, 00:12:08.781 "state": "configuring", 00:12:08.781 "raid_level": "raid0", 00:12:08.781 "superblock": true, 00:12:08.781 "num_base_bdevs": 4, 00:12:08.781 "num_base_bdevs_discovered": 2, 00:12:08.781 "num_base_bdevs_operational": 4, 00:12:08.781 "base_bdevs_list": [ 00:12:08.781 { 00:12:08.781 "name": "BaseBdev1", 00:12:08.781 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:08.781 "is_configured": true, 00:12:08.781 "data_offset": 2048, 00:12:08.781 "data_size": 63488 00:12:08.781 }, 00:12:08.781 { 00:12:08.781 "name": null, 00:12:08.781 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:08.781 "is_configured": false, 00:12:08.781 "data_offset": 0, 00:12:08.781 "data_size": 63488 00:12:08.781 }, 00:12:08.781 { 00:12:08.781 "name": null, 00:12:08.781 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:08.781 "is_configured": false, 00:12:08.781 "data_offset": 0, 00:12:08.781 "data_size": 63488 00:12:08.781 }, 00:12:08.781 { 00:12:08.781 "name": "BaseBdev4", 00:12:08.781 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:08.781 "is_configured": true, 00:12:08.781 "data_offset": 2048, 00:12:08.781 "data_size": 63488 00:12:08.781 } 00:12:08.781 ] 00:12:08.781 }' 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.781 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.040 
09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.299 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.299 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:09.299 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:09.299 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.300 [2024-10-11 09:45:53.696943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.300 "name": "Existed_Raid", 00:12:09.300 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:09.300 "strip_size_kb": 64, 00:12:09.300 "state": "configuring", 00:12:09.300 "raid_level": "raid0", 00:12:09.300 "superblock": true, 00:12:09.300 "num_base_bdevs": 4, 00:12:09.300 "num_base_bdevs_discovered": 3, 00:12:09.300 "num_base_bdevs_operational": 4, 00:12:09.300 "base_bdevs_list": [ 00:12:09.300 { 00:12:09.300 "name": "BaseBdev1", 00:12:09.300 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:09.300 "is_configured": true, 00:12:09.300 "data_offset": 2048, 00:12:09.300 "data_size": 63488 00:12:09.300 }, 00:12:09.300 { 00:12:09.300 "name": null, 00:12:09.300 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:09.300 "is_configured": false, 00:12:09.300 "data_offset": 0, 00:12:09.300 "data_size": 63488 00:12:09.300 }, 00:12:09.300 { 00:12:09.300 "name": "BaseBdev3", 00:12:09.300 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:09.300 "is_configured": true, 00:12:09.300 "data_offset": 2048, 00:12:09.300 "data_size": 63488 00:12:09.300 }, 00:12:09.300 { 00:12:09.300 "name": "BaseBdev4", 00:12:09.300 "uuid": 
"a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:09.300 "is_configured": true, 00:12:09.300 "data_offset": 2048, 00:12:09.300 "data_size": 63488 00:12:09.300 } 00:12:09.300 ] 00:12:09.300 }' 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.300 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.869 [2024-10-11 09:45:54.248076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.869 "name": "Existed_Raid", 00:12:09.869 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:09.869 "strip_size_kb": 64, 00:12:09.869 "state": "configuring", 00:12:09.869 "raid_level": "raid0", 00:12:09.869 "superblock": true, 00:12:09.869 "num_base_bdevs": 4, 00:12:09.869 "num_base_bdevs_discovered": 2, 00:12:09.869 "num_base_bdevs_operational": 4, 00:12:09.869 "base_bdevs_list": [ 00:12:09.869 { 00:12:09.869 "name": null, 00:12:09.869 
"uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:09.869 "is_configured": false, 00:12:09.869 "data_offset": 0, 00:12:09.869 "data_size": 63488 00:12:09.869 }, 00:12:09.869 { 00:12:09.869 "name": null, 00:12:09.869 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:09.869 "is_configured": false, 00:12:09.869 "data_offset": 0, 00:12:09.869 "data_size": 63488 00:12:09.869 }, 00:12:09.869 { 00:12:09.869 "name": "BaseBdev3", 00:12:09.869 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:09.869 "is_configured": true, 00:12:09.869 "data_offset": 2048, 00:12:09.869 "data_size": 63488 00:12:09.869 }, 00:12:09.869 { 00:12:09.869 "name": "BaseBdev4", 00:12:09.869 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:09.869 "is_configured": true, 00:12:09.869 "data_offset": 2048, 00:12:09.869 "data_size": 63488 00:12:09.869 } 00:12:09.869 ] 00:12:09.869 }' 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.869 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.437 [2024-10-11 09:45:54.915240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.437 09:45:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.437 "name": "Existed_Raid", 00:12:10.437 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:10.437 "strip_size_kb": 64, 00:12:10.437 "state": "configuring", 00:12:10.437 "raid_level": "raid0", 00:12:10.437 "superblock": true, 00:12:10.437 "num_base_bdevs": 4, 00:12:10.437 "num_base_bdevs_discovered": 3, 00:12:10.437 "num_base_bdevs_operational": 4, 00:12:10.437 "base_bdevs_list": [ 00:12:10.437 { 00:12:10.437 "name": null, 00:12:10.437 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:10.437 "is_configured": false, 00:12:10.437 "data_offset": 0, 00:12:10.437 "data_size": 63488 00:12:10.437 }, 00:12:10.437 { 00:12:10.437 "name": "BaseBdev2", 00:12:10.437 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:10.437 "is_configured": true, 00:12:10.437 "data_offset": 2048, 00:12:10.437 "data_size": 63488 00:12:10.437 }, 00:12:10.437 { 00:12:10.437 "name": "BaseBdev3", 00:12:10.437 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:10.437 "is_configured": true, 00:12:10.437 "data_offset": 2048, 00:12:10.437 "data_size": 63488 00:12:10.437 }, 00:12:10.437 { 00:12:10.437 "name": "BaseBdev4", 00:12:10.437 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:10.437 "is_configured": true, 00:12:10.437 "data_offset": 2048, 00:12:10.437 "data_size": 63488 00:12:10.437 } 00:12:10.437 ] 00:12:10.437 }' 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.437 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.005 09:45:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff7f107b-ab9e-425a-bb1c-e7669aacfa82 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.005 [2024-10-11 09:45:55.544890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:11.005 [2024-10-11 09:45:55.545266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:11.005 [2024-10-11 09:45:55.545283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:11.005 [2024-10-11 09:45:55.545561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:11.005 [2024-10-11 09:45:55.545708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:11.005 [2024-10-11 09:45:55.545721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:11.005 NewBaseBdev 00:12:11.005 [2024-10-11 09:45:55.545865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.005 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.006 09:45:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.006 [ 00:12:11.006 { 00:12:11.006 "name": "NewBaseBdev", 00:12:11.006 "aliases": [ 00:12:11.006 "ff7f107b-ab9e-425a-bb1c-e7669aacfa82" 00:12:11.006 ], 00:12:11.006 "product_name": "Malloc disk", 00:12:11.006 "block_size": 512, 00:12:11.006 "num_blocks": 65536, 00:12:11.006 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:11.006 "assigned_rate_limits": { 00:12:11.006 "rw_ios_per_sec": 0, 00:12:11.006 "rw_mbytes_per_sec": 0, 00:12:11.006 "r_mbytes_per_sec": 0, 00:12:11.006 "w_mbytes_per_sec": 0 00:12:11.006 }, 00:12:11.006 "claimed": true, 00:12:11.006 "claim_type": "exclusive_write", 00:12:11.006 "zoned": false, 00:12:11.006 "supported_io_types": { 00:12:11.006 "read": true, 00:12:11.006 "write": true, 00:12:11.006 "unmap": true, 00:12:11.006 "flush": true, 00:12:11.006 "reset": true, 00:12:11.006 "nvme_admin": false, 00:12:11.006 "nvme_io": false, 00:12:11.006 "nvme_io_md": false, 00:12:11.006 "write_zeroes": true, 00:12:11.006 "zcopy": true, 00:12:11.006 "get_zone_info": false, 00:12:11.006 "zone_management": false, 00:12:11.006 "zone_append": false, 00:12:11.006 "compare": false, 00:12:11.006 "compare_and_write": false, 00:12:11.006 "abort": true, 00:12:11.006 "seek_hole": false, 00:12:11.006 "seek_data": false, 00:12:11.006 "copy": true, 00:12:11.006 "nvme_iov_md": false 00:12:11.006 }, 00:12:11.006 "memory_domains": [ 00:12:11.006 { 00:12:11.006 "dma_device_id": "system", 00:12:11.006 "dma_device_type": 1 00:12:11.006 }, 00:12:11.006 { 00:12:11.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.006 "dma_device_type": 2 00:12:11.006 } 00:12:11.006 ], 00:12:11.006 "driver_specific": {} 00:12:11.006 } 00:12:11.006 ] 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:11.006 09:45:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.006 "name": "Existed_Raid", 00:12:11.006 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:11.006 "strip_size_kb": 64, 00:12:11.006 
"state": "online", 00:12:11.006 "raid_level": "raid0", 00:12:11.006 "superblock": true, 00:12:11.006 "num_base_bdevs": 4, 00:12:11.006 "num_base_bdevs_discovered": 4, 00:12:11.006 "num_base_bdevs_operational": 4, 00:12:11.006 "base_bdevs_list": [ 00:12:11.006 { 00:12:11.006 "name": "NewBaseBdev", 00:12:11.006 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:11.006 "is_configured": true, 00:12:11.006 "data_offset": 2048, 00:12:11.006 "data_size": 63488 00:12:11.006 }, 00:12:11.006 { 00:12:11.006 "name": "BaseBdev2", 00:12:11.006 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:11.006 "is_configured": true, 00:12:11.006 "data_offset": 2048, 00:12:11.006 "data_size": 63488 00:12:11.006 }, 00:12:11.006 { 00:12:11.006 "name": "BaseBdev3", 00:12:11.006 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:11.006 "is_configured": true, 00:12:11.006 "data_offset": 2048, 00:12:11.006 "data_size": 63488 00:12:11.006 }, 00:12:11.006 { 00:12:11.006 "name": "BaseBdev4", 00:12:11.006 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:11.006 "is_configured": true, 00:12:11.006 "data_offset": 2048, 00:12:11.006 "data_size": 63488 00:12:11.006 } 00:12:11.006 ] 00:12:11.006 }' 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.006 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.572 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.572 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:11.572 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.572 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.572 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.572 
09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.572 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:11.572 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.573 [2024-10-11 09:45:56.052553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.573 "name": "Existed_Raid", 00:12:11.573 "aliases": [ 00:12:11.573 "1a585c90-45cf-467a-a3f8-d81bf8fc2267" 00:12:11.573 ], 00:12:11.573 "product_name": "Raid Volume", 00:12:11.573 "block_size": 512, 00:12:11.573 "num_blocks": 253952, 00:12:11.573 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:11.573 "assigned_rate_limits": { 00:12:11.573 "rw_ios_per_sec": 0, 00:12:11.573 "rw_mbytes_per_sec": 0, 00:12:11.573 "r_mbytes_per_sec": 0, 00:12:11.573 "w_mbytes_per_sec": 0 00:12:11.573 }, 00:12:11.573 "claimed": false, 00:12:11.573 "zoned": false, 00:12:11.573 "supported_io_types": { 00:12:11.573 "read": true, 00:12:11.573 "write": true, 00:12:11.573 "unmap": true, 00:12:11.573 "flush": true, 00:12:11.573 "reset": true, 00:12:11.573 "nvme_admin": false, 00:12:11.573 "nvme_io": false, 00:12:11.573 "nvme_io_md": false, 00:12:11.573 "write_zeroes": true, 00:12:11.573 "zcopy": false, 00:12:11.573 "get_zone_info": false, 00:12:11.573 "zone_management": false, 00:12:11.573 "zone_append": false, 00:12:11.573 "compare": false, 00:12:11.573 "compare_and_write": false, 00:12:11.573 "abort": 
false, 00:12:11.573 "seek_hole": false, 00:12:11.573 "seek_data": false, 00:12:11.573 "copy": false, 00:12:11.573 "nvme_iov_md": false 00:12:11.573 }, 00:12:11.573 "memory_domains": [ 00:12:11.573 { 00:12:11.573 "dma_device_id": "system", 00:12:11.573 "dma_device_type": 1 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.573 "dma_device_type": 2 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "dma_device_id": "system", 00:12:11.573 "dma_device_type": 1 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.573 "dma_device_type": 2 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "dma_device_id": "system", 00:12:11.573 "dma_device_type": 1 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.573 "dma_device_type": 2 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "dma_device_id": "system", 00:12:11.573 "dma_device_type": 1 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.573 "dma_device_type": 2 00:12:11.573 } 00:12:11.573 ], 00:12:11.573 "driver_specific": { 00:12:11.573 "raid": { 00:12:11.573 "uuid": "1a585c90-45cf-467a-a3f8-d81bf8fc2267", 00:12:11.573 "strip_size_kb": 64, 00:12:11.573 "state": "online", 00:12:11.573 "raid_level": "raid0", 00:12:11.573 "superblock": true, 00:12:11.573 "num_base_bdevs": 4, 00:12:11.573 "num_base_bdevs_discovered": 4, 00:12:11.573 "num_base_bdevs_operational": 4, 00:12:11.573 "base_bdevs_list": [ 00:12:11.573 { 00:12:11.573 "name": "NewBaseBdev", 00:12:11.573 "uuid": "ff7f107b-ab9e-425a-bb1c-e7669aacfa82", 00:12:11.573 "is_configured": true, 00:12:11.573 "data_offset": 2048, 00:12:11.573 "data_size": 63488 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "name": "BaseBdev2", 00:12:11.573 "uuid": "6e068c48-b685-4dfb-97f6-a2dcdc768f6e", 00:12:11.573 "is_configured": true, 00:12:11.573 "data_offset": 2048, 00:12:11.573 "data_size": 63488 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 
"name": "BaseBdev3", 00:12:11.573 "uuid": "8e2ac4e8-24dc-413e-9340-8179a745e785", 00:12:11.573 "is_configured": true, 00:12:11.573 "data_offset": 2048, 00:12:11.573 "data_size": 63488 00:12:11.573 }, 00:12:11.573 { 00:12:11.573 "name": "BaseBdev4", 00:12:11.573 "uuid": "a9dbf3ad-cbf7-49b3-a30b-b3c553b90184", 00:12:11.573 "is_configured": true, 00:12:11.573 "data_offset": 2048, 00:12:11.573 "data_size": 63488 00:12:11.573 } 00:12:11.573 ] 00:12:11.573 } 00:12:11.573 } 00:12:11.573 }' 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:11.573 BaseBdev2 00:12:11.573 BaseBdev3 00:12:11.573 BaseBdev4' 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.573 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.832 09:45:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.832 [2024-10-11 09:45:56.387664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.832 [2024-10-11 09:45:56.387749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.832 [2024-10-11 09:45:56.387872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.832 [2024-10-11 09:45:56.387952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.832 [2024-10-11 09:45:56.387965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70530 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70530 ']' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70530 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70530 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70530' 00:12:11.832 killing process with pid 70530 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70530 00:12:11.832 [2024-10-11 09:45:56.436796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.832 09:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70530 00:12:12.400 [2024-10-11 09:45:56.875326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.781 09:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:13.781 00:12:13.781 real 0m12.415s 00:12:13.781 user 0m19.614s 00:12:13.781 sys 0m2.264s 00:12:13.781 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.781 
************************************ 00:12:13.781 END TEST raid_state_function_test_sb 00:12:13.781 ************************************ 00:12:13.781 09:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.781 09:45:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:13.781 09:45:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:13.781 09:45:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.781 09:45:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.781 ************************************ 00:12:13.781 START TEST raid_superblock_test 00:12:13.781 ************************************ 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71207 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71207 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71207 ']' 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:13.781 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.781 [2024-10-11 09:45:58.268532] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:13.781 [2024-10-11 09:45:58.268816] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71207 ] 00:12:14.041 [2024-10-11 09:45:58.442690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.041 [2024-10-11 09:45:58.570016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.328 [2024-10-11 09:45:58.792460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.328 [2024-10-11 09:45:58.792589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:14.598 
09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.598 malloc1 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.598 [2024-10-11 09:45:59.215563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:14.598 [2024-10-11 09:45:59.215659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.598 [2024-10-11 09:45:59.215691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:14.598 [2024-10-11 09:45:59.215702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.598 [2024-10-11 09:45:59.218063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.598 [2024-10-11 09:45:59.218111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:14.598 pt1 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.598 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.858 malloc2 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.858 [2024-10-11 09:45:59.272425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:14.858 [2024-10-11 09:45:59.272589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.858 [2024-10-11 09:45:59.272633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:14.858 [2024-10-11 09:45:59.272670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.858 [2024-10-11 09:45:59.274861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.858 [2024-10-11 09:45:59.274936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:14.858 
pt2 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.858 malloc3 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.858 [2024-10-11 09:45:59.344833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:14.858 [2024-10-11 09:45:59.345005] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.858 [2024-10-11 09:45:59.345046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:14.858 [2024-10-11 09:45:59.345076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.858 [2024-10-11 09:45:59.347295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.858 [2024-10-11 09:45:59.347384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:14.858 pt3 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.858 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 malloc4 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 [2024-10-11 09:45:59.406332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:14.859 [2024-10-11 09:45:59.406400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.859 [2024-10-11 09:45:59.406419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:14.859 [2024-10-11 09:45:59.406428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.859 [2024-10-11 09:45:59.408544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.859 [2024-10-11 09:45:59.408586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:14.859 pt4 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 [2024-10-11 09:45:59.418362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:14.859 [2024-10-11 
09:45:59.420213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.859 [2024-10-11 09:45:59.420283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:14.859 [2024-10-11 09:45:59.420351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:14.859 [2024-10-11 09:45:59.420560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:14.859 [2024-10-11 09:45:59.420573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:14.859 [2024-10-11 09:45:59.420909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:14.859 [2024-10-11 09:45:59.421084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:14.859 [2024-10-11 09:45:59.421103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:14.859 [2024-10-11 09:45:59.421258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.859 "name": "raid_bdev1", 00:12:14.859 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:14.859 "strip_size_kb": 64, 00:12:14.859 "state": "online", 00:12:14.859 "raid_level": "raid0", 00:12:14.859 "superblock": true, 00:12:14.859 "num_base_bdevs": 4, 00:12:14.859 "num_base_bdevs_discovered": 4, 00:12:14.859 "num_base_bdevs_operational": 4, 00:12:14.859 "base_bdevs_list": [ 00:12:14.859 { 00:12:14.859 "name": "pt1", 00:12:14.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.859 "is_configured": true, 00:12:14.859 "data_offset": 2048, 00:12:14.859 "data_size": 63488 00:12:14.859 }, 00:12:14.859 { 00:12:14.859 "name": "pt2", 00:12:14.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.859 "is_configured": true, 00:12:14.859 "data_offset": 2048, 00:12:14.859 "data_size": 63488 00:12:14.859 }, 00:12:14.859 { 00:12:14.859 "name": "pt3", 00:12:14.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.859 "is_configured": true, 00:12:14.859 "data_offset": 2048, 00:12:14.859 
"data_size": 63488 00:12:14.859 }, 00:12:14.859 { 00:12:14.859 "name": "pt4", 00:12:14.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.859 "is_configured": true, 00:12:14.859 "data_offset": 2048, 00:12:14.859 "data_size": 63488 00:12:14.859 } 00:12:14.859 ] 00:12:14.859 }' 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.859 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.429 [2024-10-11 09:45:59.901855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.429 "name": "raid_bdev1", 00:12:15.429 "aliases": [ 00:12:15.429 "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1" 
00:12:15.429 ], 00:12:15.429 "product_name": "Raid Volume", 00:12:15.429 "block_size": 512, 00:12:15.429 "num_blocks": 253952, 00:12:15.429 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:15.429 "assigned_rate_limits": { 00:12:15.429 "rw_ios_per_sec": 0, 00:12:15.429 "rw_mbytes_per_sec": 0, 00:12:15.429 "r_mbytes_per_sec": 0, 00:12:15.429 "w_mbytes_per_sec": 0 00:12:15.429 }, 00:12:15.429 "claimed": false, 00:12:15.429 "zoned": false, 00:12:15.429 "supported_io_types": { 00:12:15.429 "read": true, 00:12:15.429 "write": true, 00:12:15.429 "unmap": true, 00:12:15.429 "flush": true, 00:12:15.429 "reset": true, 00:12:15.429 "nvme_admin": false, 00:12:15.429 "nvme_io": false, 00:12:15.429 "nvme_io_md": false, 00:12:15.429 "write_zeroes": true, 00:12:15.429 "zcopy": false, 00:12:15.429 "get_zone_info": false, 00:12:15.429 "zone_management": false, 00:12:15.429 "zone_append": false, 00:12:15.429 "compare": false, 00:12:15.429 "compare_and_write": false, 00:12:15.429 "abort": false, 00:12:15.429 "seek_hole": false, 00:12:15.429 "seek_data": false, 00:12:15.429 "copy": false, 00:12:15.429 "nvme_iov_md": false 00:12:15.429 }, 00:12:15.429 "memory_domains": [ 00:12:15.429 { 00:12:15.429 "dma_device_id": "system", 00:12:15.429 "dma_device_type": 1 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.429 "dma_device_type": 2 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "dma_device_id": "system", 00:12:15.429 "dma_device_type": 1 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.429 "dma_device_type": 2 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "dma_device_id": "system", 00:12:15.429 "dma_device_type": 1 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.429 "dma_device_type": 2 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "dma_device_id": "system", 00:12:15.429 "dma_device_type": 1 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:15.429 "dma_device_type": 2 00:12:15.429 } 00:12:15.429 ], 00:12:15.429 "driver_specific": { 00:12:15.429 "raid": { 00:12:15.429 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:15.429 "strip_size_kb": 64, 00:12:15.429 "state": "online", 00:12:15.429 "raid_level": "raid0", 00:12:15.429 "superblock": true, 00:12:15.429 "num_base_bdevs": 4, 00:12:15.429 "num_base_bdevs_discovered": 4, 00:12:15.429 "num_base_bdevs_operational": 4, 00:12:15.429 "base_bdevs_list": [ 00:12:15.429 { 00:12:15.429 "name": "pt1", 00:12:15.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.429 "is_configured": true, 00:12:15.429 "data_offset": 2048, 00:12:15.429 "data_size": 63488 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "name": "pt2", 00:12:15.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.429 "is_configured": true, 00:12:15.429 "data_offset": 2048, 00:12:15.429 "data_size": 63488 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "name": "pt3", 00:12:15.429 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.429 "is_configured": true, 00:12:15.429 "data_offset": 2048, 00:12:15.429 "data_size": 63488 00:12:15.429 }, 00:12:15.429 { 00:12:15.429 "name": "pt4", 00:12:15.429 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.429 "is_configured": true, 00:12:15.429 "data_offset": 2048, 00:12:15.429 "data_size": 63488 00:12:15.429 } 00:12:15.429 ] 00:12:15.429 } 00:12:15.429 } 00:12:15.429 }' 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:15.429 pt2 00:12:15.429 pt3 00:12:15.429 pt4' 00:12:15.429 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.429 09:46:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.429 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.429 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:15.429 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.429 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.430 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.689 09:46:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:15.689 [2024-10-11 09:46:00.257225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.689 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7a7c033b-c2cf-48da-b1bb-c0852eacc5c1 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7a7c033b-c2cf-48da-b1bb-c0852eacc5c1 ']' 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.690 [2024-10-11 09:46:00.304846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.690 [2024-10-11 09:46:00.304884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.690 [2024-10-11 09:46:00.304984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.690 [2024-10-11 09:46:00.305061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.690 [2024-10-11 09:46:00.305077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:12:15.690 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 [2024-10-11 09:46:00.480580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:15.950 [2024-10-11 09:46:00.482456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:15.950 [2024-10-11 09:46:00.482545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:15.950 [2024-10-11 09:46:00.482597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:15.950 [2024-10-11 09:46:00.482673] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:15.950 [2024-10-11 09:46:00.482771] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:15.950 [2024-10-11 09:46:00.482859] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:15.950 [2024-10-11 09:46:00.482911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:15.950 [2024-10-11 09:46:00.482965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.950 [2024-10-11 09:46:00.483007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:15.950 request: 00:12:15.950 { 00:12:15.950 "name": "raid_bdev1", 00:12:15.950 "raid_level": "raid0", 00:12:15.950 "base_bdevs": [ 00:12:15.950 "malloc1", 00:12:15.950 "malloc2", 00:12:15.950 "malloc3", 00:12:15.950 "malloc4" 00:12:15.950 ], 00:12:15.950 "strip_size_kb": 64, 00:12:15.950 "superblock": false, 00:12:15.950 "method": "bdev_raid_create", 00:12:15.950 "req_id": 1 00:12:15.950 } 00:12:15.950 Got JSON-RPC error response 00:12:15.950 response: 00:12:15.950 { 00:12:15.950 "code": -17, 00:12:15.950 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:15.950 } 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:15.950 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.951 [2024-10-11 09:46:00.548409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.951 [2024-10-11 09:46:00.548543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.951 [2024-10-11 09:46:00.548565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:15.951 [2024-10-11 09:46:00.548577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.951 [2024-10-11 09:46:00.550705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.951 [2024-10-11 09:46:00.550758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.951 [2024-10-11 09:46:00.550842] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:15.951 [2024-10-11 09:46:00.550913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.951 pt1 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.951 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.210 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.210 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.210 "name": "raid_bdev1", 00:12:16.210 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:16.210 "strip_size_kb": 64, 00:12:16.210 "state": "configuring", 00:12:16.210 "raid_level": "raid0", 00:12:16.210 "superblock": true, 00:12:16.210 "num_base_bdevs": 4, 00:12:16.210 "num_base_bdevs_discovered": 1, 00:12:16.210 "num_base_bdevs_operational": 4, 00:12:16.210 "base_bdevs_list": [ 00:12:16.210 { 00:12:16.210 "name": "pt1", 00:12:16.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.210 "is_configured": true, 00:12:16.210 "data_offset": 2048, 00:12:16.210 "data_size": 63488 00:12:16.210 }, 00:12:16.210 { 00:12:16.210 "name": null, 00:12:16.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.210 "is_configured": false, 00:12:16.210 "data_offset": 2048, 00:12:16.210 "data_size": 63488 00:12:16.210 }, 00:12:16.210 { 00:12:16.210 "name": null, 00:12:16.210 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:16.210 "is_configured": false, 00:12:16.210 "data_offset": 2048, 00:12:16.210 "data_size": 63488 00:12:16.210 }, 00:12:16.210 { 00:12:16.210 "name": null, 00:12:16.210 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.210 "is_configured": false, 00:12:16.210 "data_offset": 2048, 00:12:16.210 "data_size": 63488 00:12:16.210 } 00:12:16.210 ] 00:12:16.210 }' 00:12:16.210 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.210 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.469 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:16.469 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:16.469 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.469 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.469 [2024-10-11 09:46:01.007754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:16.469 [2024-10-11 09:46:01.007934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.469 [2024-10-11 09:46:01.007979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:16.469 [2024-10-11 09:46:01.008021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.469 [2024-10-11 09:46:01.008617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.469 [2024-10-11 09:46:01.008690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:16.469 [2024-10-11 09:46:01.008841] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:16.469 [2024-10-11 09:46:01.008910] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:16.469 pt2 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.469 [2024-10-11 09:46:01.019762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.469 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.470 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.470 09:46:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.470 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.470 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.470 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.470 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.470 "name": "raid_bdev1", 00:12:16.470 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:16.470 "strip_size_kb": 64, 00:12:16.470 "state": "configuring", 00:12:16.470 "raid_level": "raid0", 00:12:16.470 "superblock": true, 00:12:16.470 "num_base_bdevs": 4, 00:12:16.470 "num_base_bdevs_discovered": 1, 00:12:16.470 "num_base_bdevs_operational": 4, 00:12:16.470 "base_bdevs_list": [ 00:12:16.470 { 00:12:16.470 "name": "pt1", 00:12:16.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.470 "is_configured": true, 00:12:16.470 "data_offset": 2048, 00:12:16.470 "data_size": 63488 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "name": null, 00:12:16.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.470 "is_configured": false, 00:12:16.470 "data_offset": 0, 00:12:16.470 "data_size": 63488 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "name": null, 00:12:16.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.470 "is_configured": false, 00:12:16.470 "data_offset": 2048, 00:12:16.470 "data_size": 63488 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "name": null, 00:12:16.470 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.470 "is_configured": false, 00:12:16.470 "data_offset": 2048, 00:12:16.470 "data_size": 63488 00:12:16.470 } 00:12:16.470 ] 00:12:16.470 }' 00:12:16.470 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.470 09:46:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.037 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:17.037 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.037 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.037 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.037 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.037 [2024-10-11 09:46:01.478941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.037 [2024-10-11 09:46:01.479094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.037 [2024-10-11 09:46:01.479135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:17.037 [2024-10-11 09:46:01.479166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.037 [2024-10-11 09:46:01.479644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.038 [2024-10-11 09:46:01.479713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.038 [2024-10-11 09:46:01.479841] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.038 [2024-10-11 09:46:01.479892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.038 pt2 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.038 [2024-10-11 09:46:01.490891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.038 [2024-10-11 09:46:01.491001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.038 [2024-10-11 09:46:01.491044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:17.038 [2024-10-11 09:46:01.491075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.038 [2024-10-11 09:46:01.491491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.038 [2024-10-11 09:46:01.491546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.038 [2024-10-11 09:46:01.491651] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:17.038 [2024-10-11 09:46:01.491721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.038 pt3 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.038 [2024-10-11 09:46:01.502856] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:17.038 [2024-10-11 09:46:01.502916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.038 [2024-10-11 09:46:01.502939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:17.038 [2024-10-11 09:46:01.502949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.038 [2024-10-11 09:46:01.503374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.038 [2024-10-11 09:46:01.503391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:17.038 [2024-10-11 09:46:01.503474] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:17.038 [2024-10-11 09:46:01.503494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:17.038 [2024-10-11 09:46:01.503636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:17.038 [2024-10-11 09:46:01.503645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.038 [2024-10-11 09:46:01.503950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:17.038 [2024-10-11 09:46:01.504121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:17.038 [2024-10-11 09:46:01.504142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:17.038 [2024-10-11 09:46:01.504284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.038 pt4 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.038 "name": "raid_bdev1", 00:12:17.038 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:17.038 "strip_size_kb": 64, 00:12:17.038 "state": "online", 00:12:17.038 "raid_level": "raid0", 00:12:17.038 
"superblock": true, 00:12:17.038 "num_base_bdevs": 4, 00:12:17.038 "num_base_bdevs_discovered": 4, 00:12:17.038 "num_base_bdevs_operational": 4, 00:12:17.038 "base_bdevs_list": [ 00:12:17.038 { 00:12:17.038 "name": "pt1", 00:12:17.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.038 "is_configured": true, 00:12:17.038 "data_offset": 2048, 00:12:17.038 "data_size": 63488 00:12:17.038 }, 00:12:17.038 { 00:12:17.038 "name": "pt2", 00:12:17.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.038 "is_configured": true, 00:12:17.038 "data_offset": 2048, 00:12:17.038 "data_size": 63488 00:12:17.038 }, 00:12:17.038 { 00:12:17.038 "name": "pt3", 00:12:17.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.038 "is_configured": true, 00:12:17.038 "data_offset": 2048, 00:12:17.038 "data_size": 63488 00:12:17.038 }, 00:12:17.038 { 00:12:17.038 "name": "pt4", 00:12:17.038 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.038 "is_configured": true, 00:12:17.038 "data_offset": 2048, 00:12:17.038 "data_size": 63488 00:12:17.038 } 00:12:17.038 ] 00:12:17.038 }' 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.038 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.606 09:46:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.606 [2024-10-11 09:46:01.954491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.606 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.606 "name": "raid_bdev1", 00:12:17.606 "aliases": [ 00:12:17.606 "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1" 00:12:17.606 ], 00:12:17.606 "product_name": "Raid Volume", 00:12:17.606 "block_size": 512, 00:12:17.606 "num_blocks": 253952, 00:12:17.606 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:17.606 "assigned_rate_limits": { 00:12:17.606 "rw_ios_per_sec": 0, 00:12:17.606 "rw_mbytes_per_sec": 0, 00:12:17.606 "r_mbytes_per_sec": 0, 00:12:17.607 "w_mbytes_per_sec": 0 00:12:17.607 }, 00:12:17.607 "claimed": false, 00:12:17.607 "zoned": false, 00:12:17.607 "supported_io_types": { 00:12:17.607 "read": true, 00:12:17.607 "write": true, 00:12:17.607 "unmap": true, 00:12:17.607 "flush": true, 00:12:17.607 "reset": true, 00:12:17.607 "nvme_admin": false, 00:12:17.607 "nvme_io": false, 00:12:17.607 "nvme_io_md": false, 00:12:17.607 "write_zeroes": true, 00:12:17.607 "zcopy": false, 00:12:17.607 "get_zone_info": false, 00:12:17.607 "zone_management": false, 00:12:17.607 "zone_append": false, 00:12:17.607 "compare": false, 00:12:17.607 "compare_and_write": false, 00:12:17.607 "abort": false, 00:12:17.607 "seek_hole": false, 00:12:17.607 "seek_data": false, 00:12:17.607 "copy": false, 00:12:17.607 "nvme_iov_md": false 00:12:17.607 }, 00:12:17.607 
"memory_domains": [ 00:12:17.607 { 00:12:17.607 "dma_device_id": "system", 00:12:17.607 "dma_device_type": 1 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.607 "dma_device_type": 2 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "dma_device_id": "system", 00:12:17.607 "dma_device_type": 1 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.607 "dma_device_type": 2 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "dma_device_id": "system", 00:12:17.607 "dma_device_type": 1 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.607 "dma_device_type": 2 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "dma_device_id": "system", 00:12:17.607 "dma_device_type": 1 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.607 "dma_device_type": 2 00:12:17.607 } 00:12:17.607 ], 00:12:17.607 "driver_specific": { 00:12:17.607 "raid": { 00:12:17.607 "uuid": "7a7c033b-c2cf-48da-b1bb-c0852eacc5c1", 00:12:17.607 "strip_size_kb": 64, 00:12:17.607 "state": "online", 00:12:17.607 "raid_level": "raid0", 00:12:17.607 "superblock": true, 00:12:17.607 "num_base_bdevs": 4, 00:12:17.607 "num_base_bdevs_discovered": 4, 00:12:17.607 "num_base_bdevs_operational": 4, 00:12:17.607 "base_bdevs_list": [ 00:12:17.607 { 00:12:17.607 "name": "pt1", 00:12:17.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.607 "is_configured": true, 00:12:17.607 "data_offset": 2048, 00:12:17.607 "data_size": 63488 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "name": "pt2", 00:12:17.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.607 "is_configured": true, 00:12:17.607 "data_offset": 2048, 00:12:17.607 "data_size": 63488 00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "name": "pt3", 00:12:17.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.607 "is_configured": true, 00:12:17.607 "data_offset": 2048, 00:12:17.607 "data_size": 63488 
00:12:17.607 }, 00:12:17.607 { 00:12:17.607 "name": "pt4", 00:12:17.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.607 "is_configured": true, 00:12:17.607 "data_offset": 2048, 00:12:17.607 "data_size": 63488 00:12:17.607 } 00:12:17.607 ] 00:12:17.607 } 00:12:17.607 } 00:12:17.607 }' 00:12:17.607 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:17.607 pt2 00:12:17.607 pt3 00:12:17.607 pt4' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.607 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:17.866 [2024-10-11 09:46:02.309887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7a7c033b-c2cf-48da-b1bb-c0852eacc5c1 '!=' 7a7c033b-c2cf-48da-b1bb-c0852eacc5c1 ']' 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71207 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71207 ']' 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71207 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71207 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71207' 00:12:17.866 killing process with pid 71207 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 71207 00:12:17.866 [2024-10-11 09:46:02.401037] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.866 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 71207 00:12:17.866 [2024-10-11 09:46:02.401230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.867 [2024-10-11 09:46:02.401322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.867 [2024-10-11 09:46:02.401394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:18.435 [2024-10-11 09:46:02.788728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.373 09:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:19.373 00:12:19.373 real 0m5.805s 00:12:19.373 user 0m8.326s 00:12:19.373 sys 0m1.005s 00:12:19.373 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.373 09:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 ************************************ 00:12:19.373 END TEST raid_superblock_test 
00:12:19.373 ************************************ 00:12:19.632 09:46:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:19.632 09:46:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:19.632 09:46:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.632 09:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.632 ************************************ 00:12:19.632 START TEST raid_read_error_test 00:12:19.632 ************************************ 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qVFupGC8Bn 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=71466 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71466 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71466 ']' 00:12:19.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:19.633 09:46:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 [2024-10-11 09:46:04.145357] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:19.633 [2024-10-11 09:46:04.145494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71466 ] 00:12:19.892 [2024-10-11 09:46:04.297135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.892 [2024-10-11 09:46:04.429186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.151 [2024-10-11 09:46:04.657796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.151 [2024-10-11 09:46:04.657845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 BaseBdev1_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 true 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 [2024-10-11 09:46:05.108186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:20.721 [2024-10-11 09:46:05.108254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.721 [2024-10-11 09:46:05.108279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:20.721 [2024-10-11 09:46:05.108291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.721 [2024-10-11 09:46:05.110665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.721 [2024-10-11 09:46:05.110713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.721 BaseBdev1 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 BaseBdev2_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 true 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 [2024-10-11 09:46:05.181038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:20.721 [2024-10-11 09:46:05.181213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.721 [2024-10-11 09:46:05.181240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:20.721 [2024-10-11 09:46:05.181251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.721 [2024-10-11 09:46:05.183511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.721 [2024-10-11 09:46:05.183596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.721 BaseBdev2 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 BaseBdev3_malloc 00:12:20.721 09:46:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 true 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 [2024-10-11 09:46:05.265462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:20.721 [2024-10-11 09:46:05.265538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.721 [2024-10-11 09:46:05.265560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:20.721 [2024-10-11 09:46:05.265571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.721 [2024-10-11 09:46:05.267941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.721 [2024-10-11 09:46:05.267985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:20.721 BaseBdev3 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 BaseBdev4_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 true 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 [2024-10-11 09:46:05.341298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:20.721 [2024-10-11 09:46:05.341464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.721 [2024-10-11 09:46:05.341506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:20.721 [2024-10-11 09:46:05.341544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.721 [2024-10-11 09:46:05.343902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.721 [2024-10-11 09:46:05.343985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:20.721 BaseBdev4 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.721 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.982 [2024-10-11 09:46:05.353366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.982 [2024-10-11 09:46:05.355575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.982 [2024-10-11 09:46:05.355744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.982 [2024-10-11 09:46:05.355868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.982 [2024-10-11 09:46:05.356186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:20.982 [2024-10-11 09:46:05.356248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:20.982 [2024-10-11 09:46:05.356571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:20.982 [2024-10-11 09:46:05.356849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:20.982 [2024-10-11 09:46:05.356915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:20.982 [2024-10-11 09:46:05.357143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:20.982 09:46:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.982 "name": "raid_bdev1", 00:12:20.982 "uuid": "3b89c722-6a8b-4034-b0d2-9374f6e1cad6", 00:12:20.982 "strip_size_kb": 64, 00:12:20.982 "state": "online", 00:12:20.982 "raid_level": "raid0", 00:12:20.982 "superblock": true, 00:12:20.982 "num_base_bdevs": 4, 00:12:20.982 "num_base_bdevs_discovered": 4, 00:12:20.982 "num_base_bdevs_operational": 4, 00:12:20.982 "base_bdevs_list": [ 00:12:20.982 
{ 00:12:20.982 "name": "BaseBdev1", 00:12:20.982 "uuid": "a1da66a6-105e-5207-8e53-86468f571d15", 00:12:20.982 "is_configured": true, 00:12:20.982 "data_offset": 2048, 00:12:20.982 "data_size": 63488 00:12:20.982 }, 00:12:20.982 { 00:12:20.982 "name": "BaseBdev2", 00:12:20.982 "uuid": "7bf44e80-6ac4-5494-bc63-c89998b875da", 00:12:20.982 "is_configured": true, 00:12:20.982 "data_offset": 2048, 00:12:20.982 "data_size": 63488 00:12:20.982 }, 00:12:20.982 { 00:12:20.982 "name": "BaseBdev3", 00:12:20.982 "uuid": "9a0cec0d-ce44-50d0-a85c-d5f853909abf", 00:12:20.982 "is_configured": true, 00:12:20.982 "data_offset": 2048, 00:12:20.982 "data_size": 63488 00:12:20.982 }, 00:12:20.982 { 00:12:20.982 "name": "BaseBdev4", 00:12:20.982 "uuid": "dbbbe65e-98ab-5eff-8d87-f377b1ae976e", 00:12:20.982 "is_configured": true, 00:12:20.982 "data_offset": 2048, 00:12:20.982 "data_size": 63488 00:12:20.982 } 00:12:20.982 ] 00:12:20.982 }' 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.982 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.242 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:21.242 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:21.509 [2024-10-11 09:46:05.950153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.445 09:46:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.445 09:46:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.445 "name": "raid_bdev1", 00:12:22.445 "uuid": "3b89c722-6a8b-4034-b0d2-9374f6e1cad6", 00:12:22.445 "strip_size_kb": 64, 00:12:22.445 "state": "online", 00:12:22.445 "raid_level": "raid0", 00:12:22.445 "superblock": true, 00:12:22.445 "num_base_bdevs": 4, 00:12:22.445 "num_base_bdevs_discovered": 4, 00:12:22.445 "num_base_bdevs_operational": 4, 00:12:22.445 "base_bdevs_list": [ 00:12:22.445 { 00:12:22.445 "name": "BaseBdev1", 00:12:22.445 "uuid": "a1da66a6-105e-5207-8e53-86468f571d15", 00:12:22.445 "is_configured": true, 00:12:22.445 "data_offset": 2048, 00:12:22.445 "data_size": 63488 00:12:22.445 }, 00:12:22.445 { 00:12:22.445 "name": "BaseBdev2", 00:12:22.445 "uuid": "7bf44e80-6ac4-5494-bc63-c89998b875da", 00:12:22.445 "is_configured": true, 00:12:22.445 "data_offset": 2048, 00:12:22.445 "data_size": 63488 00:12:22.445 }, 00:12:22.445 { 00:12:22.445 "name": "BaseBdev3", 00:12:22.445 "uuid": "9a0cec0d-ce44-50d0-a85c-d5f853909abf", 00:12:22.445 "is_configured": true, 00:12:22.445 "data_offset": 2048, 00:12:22.445 "data_size": 63488 00:12:22.445 }, 00:12:22.445 { 00:12:22.445 "name": "BaseBdev4", 00:12:22.445 "uuid": "dbbbe65e-98ab-5eff-8d87-f377b1ae976e", 00:12:22.445 "is_configured": true, 00:12:22.445 "data_offset": 2048, 00:12:22.445 "data_size": 63488 00:12:22.445 } 00:12:22.445 ] 00:12:22.445 }' 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.445 09:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.015 [2024-10-11 09:46:07.367444] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.015 [2024-10-11 09:46:07.367494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.015 [2024-10-11 09:46:07.370667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.015 [2024-10-11 09:46:07.370781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.015 [2024-10-11 09:46:07.370863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.015 [2024-10-11 09:46:07.370915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:23.015 { 00:12:23.015 "results": [ 00:12:23.015 { 00:12:23.015 "job": "raid_bdev1", 00:12:23.015 "core_mask": "0x1", 00:12:23.015 "workload": "randrw", 00:12:23.015 "percentage": 50, 00:12:23.015 "status": "finished", 00:12:23.015 "queue_depth": 1, 00:12:23.015 "io_size": 131072, 00:12:23.015 "runtime": 1.417951, 00:12:23.015 "iops": 13427.826490478163, 00:12:23.015 "mibps": 1678.4783113097703, 00:12:23.015 "io_failed": 1, 00:12:23.015 "io_timeout": 0, 00:12:23.015 "avg_latency_us": 103.35645925168603, 00:12:23.015 "min_latency_us": 27.94759825327511, 00:12:23.015 "max_latency_us": 1752.8733624454148 00:12:23.015 } 00:12:23.015 ], 00:12:23.015 "core_count": 1 00:12:23.015 } 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71466 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71466 ']' 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71466 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71466 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71466' 00:12:23.015 killing process with pid 71466 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71466 00:12:23.015 [2024-10-11 09:46:07.408872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.015 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71466 00:12:23.274 [2024-10-11 09:46:07.754750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qVFupGC8Bn 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:24.654 00:12:24.654 real 0m4.935s 00:12:24.654 user 0m5.915s 00:12:24.654 sys 0m0.624s 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:24.654 ************************************ 00:12:24.654 END TEST raid_read_error_test 00:12:24.654 ************************************ 00:12:24.654 09:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.654 09:46:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:24.654 09:46:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:24.654 09:46:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.654 09:46:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.654 ************************************ 00:12:24.654 START TEST raid_write_error_test 00:12:24.654 ************************************ 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zN6W4iQ8JY 00:12:24.654 09:46:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71617 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71617 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71617 ']' 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.654 09:46:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.654 [2024-10-11 09:46:09.155283] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:24.654 [2024-10-11 09:46:09.155406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71617 ] 00:12:24.914 [2024-10-11 09:46:09.322368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.914 [2024-10-11 09:46:09.463860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.174 [2024-10-11 09:46:09.709433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.174 [2024-10-11 09:46:09.709618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 BaseBdev1_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 true 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 [2024-10-11 09:46:10.146246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:25.743 [2024-10-11 09:46:10.146334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.743 [2024-10-11 09:46:10.146363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:25.743 [2024-10-11 09:46:10.146377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.743 [2024-10-11 09:46:10.149018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.743 [2024-10-11 09:46:10.149084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:25.743 BaseBdev1 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 BaseBdev2_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:25.743 09:46:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 true 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 [2024-10-11 09:46:10.221967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:25.743 [2024-10-11 09:46:10.222058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.743 [2024-10-11 09:46:10.222083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:25.743 [2024-10-11 09:46:10.222096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.743 [2024-10-11 09:46:10.224863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.743 [2024-10-11 09:46:10.225038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:25.743 BaseBdev2 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:25.743 BaseBdev3_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 true 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.743 [2024-10-11 09:46:10.330456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:25.743 [2024-10-11 09:46:10.330559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.743 [2024-10-11 09:46:10.330588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:25.743 [2024-10-11 09:46:10.330601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.743 [2024-10-11 09:46:10.333372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.743 [2024-10-11 09:46:10.333535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:25.743 BaseBdev3 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.743 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.004 BaseBdev4_malloc 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.004 true 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.004 [2024-10-11 09:46:10.406923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:26.004 [2024-10-11 09:46:10.407003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.004 [2024-10-11 09:46:10.407031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.004 [2024-10-11 09:46:10.407045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.004 [2024-10-11 09:46:10.409653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.004 [2024-10-11 09:46:10.409712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:26.004 BaseBdev4 
00:12:26.004 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.005 [2024-10-11 09:46:10.418990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.005 [2024-10-11 09:46:10.421286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.005 [2024-10-11 09:46:10.421387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.005 [2024-10-11 09:46:10.421458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:26.005 [2024-10-11 09:46:10.421699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:26.005 [2024-10-11 09:46:10.421714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:26.005 [2024-10-11 09:46:10.422024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:26.005 [2024-10-11 09:46:10.422200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:26.005 [2024-10-11 09:46:10.422291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:26.005 [2024-10-11 09:46:10.422536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.005 "name": "raid_bdev1", 00:12:26.005 "uuid": "7e2b9641-468d-4960-bc0b-99354f68500a", 00:12:26.005 "strip_size_kb": 64, 00:12:26.005 "state": "online", 00:12:26.005 "raid_level": "raid0", 00:12:26.005 "superblock": true, 00:12:26.005 "num_base_bdevs": 4, 00:12:26.005 "num_base_bdevs_discovered": 4, 00:12:26.005 
"num_base_bdevs_operational": 4, 00:12:26.005 "base_bdevs_list": [ 00:12:26.005 { 00:12:26.005 "name": "BaseBdev1", 00:12:26.005 "uuid": "da039943-ad86-50b9-8d24-3cc7d074a89f", 00:12:26.005 "is_configured": true, 00:12:26.005 "data_offset": 2048, 00:12:26.005 "data_size": 63488 00:12:26.005 }, 00:12:26.005 { 00:12:26.005 "name": "BaseBdev2", 00:12:26.005 "uuid": "4d1ebd42-5656-575f-864d-bafc04ef9eef", 00:12:26.005 "is_configured": true, 00:12:26.005 "data_offset": 2048, 00:12:26.005 "data_size": 63488 00:12:26.005 }, 00:12:26.005 { 00:12:26.005 "name": "BaseBdev3", 00:12:26.005 "uuid": "b4a4f78a-ee72-52dc-8e77-5972d4e70204", 00:12:26.005 "is_configured": true, 00:12:26.005 "data_offset": 2048, 00:12:26.005 "data_size": 63488 00:12:26.005 }, 00:12:26.005 { 00:12:26.005 "name": "BaseBdev4", 00:12:26.005 "uuid": "034de6f9-2a39-5b07-962a-19a497a45e06", 00:12:26.005 "is_configured": true, 00:12:26.005 "data_offset": 2048, 00:12:26.005 "data_size": 63488 00:12:26.005 } 00:12:26.005 ] 00:12:26.005 }' 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.005 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.575 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:26.575 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:26.575 [2024-10-11 09:46:11.028029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.514 "name": "raid_bdev1", 00:12:27.514 "uuid": "7e2b9641-468d-4960-bc0b-99354f68500a", 00:12:27.514 "strip_size_kb": 64, 00:12:27.514 "state": "online", 00:12:27.514 "raid_level": "raid0", 00:12:27.514 "superblock": true, 00:12:27.514 "num_base_bdevs": 4, 00:12:27.514 "num_base_bdevs_discovered": 4, 00:12:27.514 "num_base_bdevs_operational": 4, 00:12:27.514 "base_bdevs_list": [ 00:12:27.514 { 00:12:27.514 "name": "BaseBdev1", 00:12:27.514 "uuid": "da039943-ad86-50b9-8d24-3cc7d074a89f", 00:12:27.514 "is_configured": true, 00:12:27.514 "data_offset": 2048, 00:12:27.514 "data_size": 63488 00:12:27.514 }, 00:12:27.514 { 00:12:27.514 "name": "BaseBdev2", 00:12:27.514 "uuid": "4d1ebd42-5656-575f-864d-bafc04ef9eef", 00:12:27.514 "is_configured": true, 00:12:27.514 "data_offset": 2048, 00:12:27.514 "data_size": 63488 00:12:27.514 }, 00:12:27.514 { 00:12:27.514 "name": "BaseBdev3", 00:12:27.514 "uuid": "b4a4f78a-ee72-52dc-8e77-5972d4e70204", 00:12:27.514 "is_configured": true, 00:12:27.514 "data_offset": 2048, 00:12:27.514 "data_size": 63488 00:12:27.514 }, 00:12:27.514 { 00:12:27.514 "name": "BaseBdev4", 00:12:27.514 "uuid": "034de6f9-2a39-5b07-962a-19a497a45e06", 00:12:27.514 "is_configured": true, 00:12:27.514 "data_offset": 2048, 00:12:27.514 "data_size": 63488 00:12:27.514 } 00:12:27.514 ] 00:12:27.514 }' 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.514 09:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.773 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.773 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.773 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:27.773 [2024-10-11 09:46:12.397630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.773 [2024-10-11 09:46:12.397800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.773 [2024-10-11 09:46:12.401331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.773 [2024-10-11 09:46:12.401403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.773 [2024-10-11 09:46:12.401474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.773 [2024-10-11 09:46:12.401497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:27.773 { 00:12:27.773 "results": [ 00:12:27.773 { 00:12:27.773 "job": "raid_bdev1", 00:12:27.773 "core_mask": "0x1", 00:12:27.773 "workload": "randrw", 00:12:27.773 "percentage": 50, 00:12:27.773 "status": "finished", 00:12:27.773 "queue_depth": 1, 00:12:27.773 "io_size": 131072, 00:12:27.773 "runtime": 1.369853, 00:12:27.773 "iops": 12794.073524677466, 00:12:27.773 "mibps": 1599.2591905846832, 00:12:27.773 "io_failed": 1, 00:12:27.773 "io_timeout": 0, 00:12:27.773 "avg_latency_us": 108.65847611781996, 00:12:27.773 "min_latency_us": 27.83580786026201, 00:12:27.773 "max_latency_us": 1788.646288209607 00:12:27.773 } 00:12:27.773 ], 00:12:27.773 "core_count": 1 00:12:27.773 } 00:12:27.773 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.773 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71617 00:12:27.773 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71617 ']' 00:12:27.773 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71617 00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71617 00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71617' 00:12:28.032 killing process with pid 71617 00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71617 00:12:28.032 [2024-10-11 09:46:12.447403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.032 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71617 00:12:28.292 [2024-10-11 09:46:12.809378] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.670 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zN6W4iQ8JY 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:29.671 00:12:29.671 real 0m5.104s 00:12:29.671 user 0m6.033s 00:12:29.671 sys 0m0.643s 00:12:29.671 09:46:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.671 09:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.671 ************************************ 00:12:29.671 END TEST raid_write_error_test 00:12:29.671 ************************************ 00:12:29.671 09:46:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:29.671 09:46:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:29.671 09:46:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:29.671 09:46:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.671 09:46:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.671 ************************************ 00:12:29.671 START TEST raid_state_function_test 00:12:29.671 ************************************ 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:29.671 Process raid pid: 71761 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71761 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71761' 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71761 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71761 ']' 00:12:29.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.671 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.942 [2024-10-11 09:46:14.334809] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:29.942 [2024-10-11 09:46:14.334970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.942 [2024-10-11 09:46:14.513188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.217 [2024-10-11 09:46:14.693755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.477 [2024-10-11 09:46:14.999955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.477 [2024-10-11 09:46:15.000010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.737 [2024-10-11 09:46:15.235092] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.737 [2024-10-11 09:46:15.235172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.737 [2024-10-11 09:46:15.235183] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.737 [2024-10-11 09:46:15.235193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.737 [2024-10-11 09:46:15.235200] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:30.737 [2024-10-11 09:46:15.235210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.737 [2024-10-11 09:46:15.235216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.737 [2024-10-11 09:46:15.235225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.737 "name": "Existed_Raid", 00:12:30.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.737 "strip_size_kb": 64, 00:12:30.737 "state": "configuring", 00:12:30.737 "raid_level": "concat", 00:12:30.737 "superblock": false, 00:12:30.737 "num_base_bdevs": 4, 00:12:30.737 "num_base_bdevs_discovered": 0, 00:12:30.737 "num_base_bdevs_operational": 4, 00:12:30.737 "base_bdevs_list": [ 00:12:30.737 { 00:12:30.737 "name": "BaseBdev1", 00:12:30.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.737 "is_configured": false, 00:12:30.737 "data_offset": 0, 00:12:30.737 "data_size": 0 00:12:30.737 }, 00:12:30.737 { 00:12:30.737 "name": "BaseBdev2", 00:12:30.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.737 "is_configured": false, 00:12:30.737 "data_offset": 0, 00:12:30.737 "data_size": 0 00:12:30.737 }, 00:12:30.737 { 00:12:30.737 "name": "BaseBdev3", 00:12:30.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.737 "is_configured": false, 00:12:30.737 "data_offset": 0, 00:12:30.737 "data_size": 0 00:12:30.737 }, 00:12:30.737 { 00:12:30.737 "name": "BaseBdev4", 00:12:30.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.737 "is_configured": false, 00:12:30.737 "data_offset": 0, 00:12:30.737 "data_size": 0 00:12:30.737 } 00:12:30.737 ] 00:12:30.737 }' 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.737 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.307 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:31.307 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.307 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.307 [2024-10-11 09:46:15.746177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.307 [2024-10-11 09:46:15.746322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:31.307 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.307 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.307 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 [2024-10-11 09:46:15.754163] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.308 [2024-10-11 09:46:15.754251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.308 [2024-10-11 09:46:15.754279] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.308 [2024-10-11 09:46:15.754301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.308 [2024-10-11 09:46:15.754318] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.308 [2024-10-11 09:46:15.754338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.308 [2024-10-11 09:46:15.754355] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.308 [2024-10-11 09:46:15.754375] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 [2024-10-11 09:46:15.800576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.308 BaseBdev1 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 [ 00:12:31.308 { 00:12:31.308 "name": "BaseBdev1", 00:12:31.308 "aliases": [ 00:12:31.308 "81c58769-9330-4a92-85f5-98c49c3b540d" 00:12:31.308 ], 00:12:31.308 "product_name": "Malloc disk", 00:12:31.308 "block_size": 512, 00:12:31.308 "num_blocks": 65536, 00:12:31.308 "uuid": "81c58769-9330-4a92-85f5-98c49c3b540d", 00:12:31.308 "assigned_rate_limits": { 00:12:31.308 "rw_ios_per_sec": 0, 00:12:31.308 "rw_mbytes_per_sec": 0, 00:12:31.308 "r_mbytes_per_sec": 0, 00:12:31.308 "w_mbytes_per_sec": 0 00:12:31.308 }, 00:12:31.308 "claimed": true, 00:12:31.308 "claim_type": "exclusive_write", 00:12:31.308 "zoned": false, 00:12:31.308 "supported_io_types": { 00:12:31.308 "read": true, 00:12:31.308 "write": true, 00:12:31.308 "unmap": true, 00:12:31.308 "flush": true, 00:12:31.308 "reset": true, 00:12:31.308 "nvme_admin": false, 00:12:31.308 "nvme_io": false, 00:12:31.308 "nvme_io_md": false, 00:12:31.308 "write_zeroes": true, 00:12:31.308 "zcopy": true, 00:12:31.308 "get_zone_info": false, 00:12:31.308 "zone_management": false, 00:12:31.308 "zone_append": false, 00:12:31.308 "compare": false, 00:12:31.308 "compare_and_write": false, 00:12:31.308 "abort": true, 00:12:31.308 "seek_hole": false, 00:12:31.308 "seek_data": false, 00:12:31.308 "copy": true, 00:12:31.308 "nvme_iov_md": false 00:12:31.308 }, 00:12:31.308 "memory_domains": [ 00:12:31.308 { 00:12:31.308 "dma_device_id": "system", 00:12:31.308 "dma_device_type": 1 00:12:31.308 }, 00:12:31.308 { 00:12:31.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.308 "dma_device_type": 2 00:12:31.308 } 00:12:31.308 ], 00:12:31.308 "driver_specific": {} 00:12:31.308 } 00:12:31.308 ] 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.308 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.308 "name": "Existed_Raid", 
00:12:31.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.308 "strip_size_kb": 64, 00:12:31.308 "state": "configuring", 00:12:31.308 "raid_level": "concat", 00:12:31.308 "superblock": false, 00:12:31.308 "num_base_bdevs": 4, 00:12:31.308 "num_base_bdevs_discovered": 1, 00:12:31.308 "num_base_bdevs_operational": 4, 00:12:31.308 "base_bdevs_list": [ 00:12:31.308 { 00:12:31.308 "name": "BaseBdev1", 00:12:31.308 "uuid": "81c58769-9330-4a92-85f5-98c49c3b540d", 00:12:31.308 "is_configured": true, 00:12:31.308 "data_offset": 0, 00:12:31.308 "data_size": 65536 00:12:31.308 }, 00:12:31.308 { 00:12:31.308 "name": "BaseBdev2", 00:12:31.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.308 "is_configured": false, 00:12:31.308 "data_offset": 0, 00:12:31.308 "data_size": 0 00:12:31.308 }, 00:12:31.308 { 00:12:31.308 "name": "BaseBdev3", 00:12:31.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.308 "is_configured": false, 00:12:31.308 "data_offset": 0, 00:12:31.308 "data_size": 0 00:12:31.308 }, 00:12:31.308 { 00:12:31.308 "name": "BaseBdev4", 00:12:31.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.308 "is_configured": false, 00:12:31.308 "data_offset": 0, 00:12:31.309 "data_size": 0 00:12:31.309 } 00:12:31.309 ] 00:12:31.309 }' 00:12:31.309 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.309 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.877 [2024-10-11 09:46:16.355847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.877 [2024-10-11 09:46:16.355927] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.877 [2024-10-11 09:46:16.367927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.877 [2024-10-11 09:46:16.370251] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.877 [2024-10-11 09:46:16.370305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.877 [2024-10-11 09:46:16.370318] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.877 [2024-10-11 09:46:16.370332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.877 [2024-10-11 09:46:16.370340] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.877 [2024-10-11 09:46:16.370350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.877 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.878 "name": "Existed_Raid", 00:12:31.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.878 "strip_size_kb": 64, 00:12:31.878 "state": "configuring", 00:12:31.878 "raid_level": "concat", 00:12:31.878 "superblock": false, 00:12:31.878 "num_base_bdevs": 4, 00:12:31.878 
"num_base_bdevs_discovered": 1, 00:12:31.878 "num_base_bdevs_operational": 4, 00:12:31.878 "base_bdevs_list": [ 00:12:31.878 { 00:12:31.878 "name": "BaseBdev1", 00:12:31.878 "uuid": "81c58769-9330-4a92-85f5-98c49c3b540d", 00:12:31.878 "is_configured": true, 00:12:31.878 "data_offset": 0, 00:12:31.878 "data_size": 65536 00:12:31.878 }, 00:12:31.878 { 00:12:31.878 "name": "BaseBdev2", 00:12:31.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.878 "is_configured": false, 00:12:31.878 "data_offset": 0, 00:12:31.878 "data_size": 0 00:12:31.878 }, 00:12:31.878 { 00:12:31.878 "name": "BaseBdev3", 00:12:31.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.878 "is_configured": false, 00:12:31.878 "data_offset": 0, 00:12:31.878 "data_size": 0 00:12:31.878 }, 00:12:31.878 { 00:12:31.878 "name": "BaseBdev4", 00:12:31.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.878 "is_configured": false, 00:12:31.878 "data_offset": 0, 00:12:31.878 "data_size": 0 00:12:31.878 } 00:12:31.878 ] 00:12:31.878 }' 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.878 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.447 [2024-10-11 09:46:16.889247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.447 BaseBdev2 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:32.447 09:46:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.447 [ 00:12:32.447 { 00:12:32.447 "name": "BaseBdev2", 00:12:32.447 "aliases": [ 00:12:32.447 "063cf86b-265f-4dab-a284-9303ab284460" 00:12:32.447 ], 00:12:32.447 "product_name": "Malloc disk", 00:12:32.447 "block_size": 512, 00:12:32.447 "num_blocks": 65536, 00:12:32.447 "uuid": "063cf86b-265f-4dab-a284-9303ab284460", 00:12:32.447 "assigned_rate_limits": { 00:12:32.447 "rw_ios_per_sec": 0, 00:12:32.447 "rw_mbytes_per_sec": 0, 00:12:32.447 "r_mbytes_per_sec": 0, 00:12:32.447 "w_mbytes_per_sec": 0 00:12:32.447 }, 00:12:32.447 "claimed": true, 00:12:32.447 "claim_type": "exclusive_write", 00:12:32.447 "zoned": false, 00:12:32.447 "supported_io_types": { 
00:12:32.447 "read": true, 00:12:32.447 "write": true, 00:12:32.447 "unmap": true, 00:12:32.447 "flush": true, 00:12:32.447 "reset": true, 00:12:32.447 "nvme_admin": false, 00:12:32.447 "nvme_io": false, 00:12:32.447 "nvme_io_md": false, 00:12:32.447 "write_zeroes": true, 00:12:32.447 "zcopy": true, 00:12:32.447 "get_zone_info": false, 00:12:32.447 "zone_management": false, 00:12:32.447 "zone_append": false, 00:12:32.447 "compare": false, 00:12:32.447 "compare_and_write": false, 00:12:32.447 "abort": true, 00:12:32.447 "seek_hole": false, 00:12:32.447 "seek_data": false, 00:12:32.447 "copy": true, 00:12:32.447 "nvme_iov_md": false 00:12:32.447 }, 00:12:32.447 "memory_domains": [ 00:12:32.447 { 00:12:32.447 "dma_device_id": "system", 00:12:32.447 "dma_device_type": 1 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.447 "dma_device_type": 2 00:12:32.447 } 00:12:32.447 ], 00:12:32.447 "driver_specific": {} 00:12:32.447 } 00:12:32.447 ] 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.447 "name": "Existed_Raid", 00:12:32.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.447 "strip_size_kb": 64, 00:12:32.447 "state": "configuring", 00:12:32.447 "raid_level": "concat", 00:12:32.447 "superblock": false, 00:12:32.447 "num_base_bdevs": 4, 00:12:32.447 "num_base_bdevs_discovered": 2, 00:12:32.447 "num_base_bdevs_operational": 4, 00:12:32.447 "base_bdevs_list": [ 00:12:32.447 { 00:12:32.447 "name": "BaseBdev1", 00:12:32.447 "uuid": "81c58769-9330-4a92-85f5-98c49c3b540d", 00:12:32.447 "is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev2", 00:12:32.447 "uuid": "063cf86b-265f-4dab-a284-9303ab284460", 00:12:32.447 
"is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev3", 00:12:32.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.447 "is_configured": false, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 0 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev4", 00:12:32.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.447 "is_configured": false, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 0 00:12:32.447 } 00:12:32.447 ] 00:12:32.447 }' 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.447 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.016 [2024-10-11 09:46:17.444408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.016 BaseBdev3 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.016 [ 00:12:33.016 { 00:12:33.016 "name": "BaseBdev3", 00:12:33.016 "aliases": [ 00:12:33.016 "71b73952-9689-4d7a-a33d-52452d80c045" 00:12:33.016 ], 00:12:33.016 "product_name": "Malloc disk", 00:12:33.016 "block_size": 512, 00:12:33.016 "num_blocks": 65536, 00:12:33.016 "uuid": "71b73952-9689-4d7a-a33d-52452d80c045", 00:12:33.016 "assigned_rate_limits": { 00:12:33.016 "rw_ios_per_sec": 0, 00:12:33.016 "rw_mbytes_per_sec": 0, 00:12:33.016 "r_mbytes_per_sec": 0, 00:12:33.016 "w_mbytes_per_sec": 0 00:12:33.016 }, 00:12:33.016 "claimed": true, 00:12:33.016 "claim_type": "exclusive_write", 00:12:33.016 "zoned": false, 00:12:33.016 "supported_io_types": { 00:12:33.016 "read": true, 00:12:33.016 "write": true, 00:12:33.016 "unmap": true, 00:12:33.016 "flush": true, 00:12:33.016 "reset": true, 00:12:33.016 "nvme_admin": false, 00:12:33.016 "nvme_io": false, 00:12:33.016 "nvme_io_md": false, 00:12:33.016 "write_zeroes": true, 00:12:33.016 "zcopy": true, 00:12:33.016 "get_zone_info": false, 00:12:33.016 "zone_management": false, 00:12:33.016 "zone_append": false, 00:12:33.016 "compare": false, 00:12:33.016 "compare_and_write": false, 
00:12:33.016 "abort": true, 00:12:33.016 "seek_hole": false, 00:12:33.016 "seek_data": false, 00:12:33.016 "copy": true, 00:12:33.016 "nvme_iov_md": false 00:12:33.016 }, 00:12:33.016 "memory_domains": [ 00:12:33.016 { 00:12:33.016 "dma_device_id": "system", 00:12:33.016 "dma_device_type": 1 00:12:33.016 }, 00:12:33.016 { 00:12:33.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.016 "dma_device_type": 2 00:12:33.016 } 00:12:33.016 ], 00:12:33.016 "driver_specific": {} 00:12:33.016 } 00:12:33.016 ] 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.016 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.016 "name": "Existed_Raid", 00:12:33.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.016 "strip_size_kb": 64, 00:12:33.016 "state": "configuring", 00:12:33.016 "raid_level": "concat", 00:12:33.016 "superblock": false, 00:12:33.016 "num_base_bdevs": 4, 00:12:33.016 "num_base_bdevs_discovered": 3, 00:12:33.016 "num_base_bdevs_operational": 4, 00:12:33.016 "base_bdevs_list": [ 00:12:33.016 { 00:12:33.016 "name": "BaseBdev1", 00:12:33.016 "uuid": "81c58769-9330-4a92-85f5-98c49c3b540d", 00:12:33.016 "is_configured": true, 00:12:33.016 "data_offset": 0, 00:12:33.016 "data_size": 65536 00:12:33.016 }, 00:12:33.016 { 00:12:33.016 "name": "BaseBdev2", 00:12:33.016 "uuid": "063cf86b-265f-4dab-a284-9303ab284460", 00:12:33.016 "is_configured": true, 00:12:33.016 "data_offset": 0, 00:12:33.016 "data_size": 65536 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "name": "BaseBdev3", 00:12:33.017 "uuid": "71b73952-9689-4d7a-a33d-52452d80c045", 00:12:33.017 "is_configured": true, 00:12:33.017 "data_offset": 0, 00:12:33.017 "data_size": 65536 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "name": "BaseBdev4", 00:12:33.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.017 "is_configured": false, 
00:12:33.017 "data_offset": 0, 00:12:33.017 "data_size": 0 00:12:33.017 } 00:12:33.017 ] 00:12:33.017 }' 00:12:33.017 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.017 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.587 [2024-10-11 09:46:17.972730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.587 [2024-10-11 09:46:17.972819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.587 [2024-10-11 09:46:17.972829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:33.587 [2024-10-11 09:46:17.973162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:33.587 [2024-10-11 09:46:17.973393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.587 [2024-10-11 09:46:17.973410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:33.587 [2024-10-11 09:46:17.973707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.587 BaseBdev4 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.587 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.587 [ 00:12:33.587 { 00:12:33.587 "name": "BaseBdev4", 00:12:33.587 "aliases": [ 00:12:33.587 "93dbf6ce-670c-48e3-92e3-e55519bbf023" 00:12:33.587 ], 00:12:33.587 "product_name": "Malloc disk", 00:12:33.587 "block_size": 512, 00:12:33.587 "num_blocks": 65536, 00:12:33.587 "uuid": "93dbf6ce-670c-48e3-92e3-e55519bbf023", 00:12:33.587 "assigned_rate_limits": { 00:12:33.587 "rw_ios_per_sec": 0, 00:12:33.587 "rw_mbytes_per_sec": 0, 00:12:33.587 "r_mbytes_per_sec": 0, 00:12:33.587 "w_mbytes_per_sec": 0 00:12:33.587 }, 00:12:33.587 "claimed": true, 00:12:33.587 "claim_type": "exclusive_write", 00:12:33.587 "zoned": false, 00:12:33.587 "supported_io_types": { 00:12:33.587 "read": true, 00:12:33.587 "write": true, 00:12:33.587 "unmap": true, 00:12:33.587 "flush": true, 00:12:33.587 "reset": true, 00:12:33.587 
"nvme_admin": false, 00:12:33.587 "nvme_io": false, 00:12:33.587 "nvme_io_md": false, 00:12:33.587 "write_zeroes": true, 00:12:33.587 "zcopy": true, 00:12:33.587 "get_zone_info": false, 00:12:33.587 "zone_management": false, 00:12:33.587 "zone_append": false, 00:12:33.587 "compare": false, 00:12:33.587 "compare_and_write": false, 00:12:33.587 "abort": true, 00:12:33.587 "seek_hole": false, 00:12:33.587 "seek_data": false, 00:12:33.587 "copy": true, 00:12:33.587 "nvme_iov_md": false 00:12:33.587 }, 00:12:33.587 "memory_domains": [ 00:12:33.587 { 00:12:33.587 "dma_device_id": "system", 00:12:33.587 "dma_device_type": 1 00:12:33.587 }, 00:12:33.587 { 00:12:33.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.587 "dma_device_type": 2 00:12:33.587 } 00:12:33.587 ], 00:12:33.587 "driver_specific": {} 00:12:33.587 } 00:12:33.587 ] 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.587 
09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.587 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.588 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.588 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.588 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.588 "name": "Existed_Raid", 00:12:33.588 "uuid": "625e009b-0023-49ca-bc53-4b00b9562cc1", 00:12:33.588 "strip_size_kb": 64, 00:12:33.588 "state": "online", 00:12:33.588 "raid_level": "concat", 00:12:33.588 "superblock": false, 00:12:33.588 "num_base_bdevs": 4, 00:12:33.588 "num_base_bdevs_discovered": 4, 00:12:33.588 "num_base_bdevs_operational": 4, 00:12:33.588 "base_bdevs_list": [ 00:12:33.588 { 00:12:33.588 "name": "BaseBdev1", 00:12:33.588 "uuid": "81c58769-9330-4a92-85f5-98c49c3b540d", 00:12:33.588 "is_configured": true, 00:12:33.588 "data_offset": 0, 00:12:33.588 "data_size": 65536 00:12:33.588 }, 00:12:33.588 { 00:12:33.588 "name": "BaseBdev2", 00:12:33.588 "uuid": "063cf86b-265f-4dab-a284-9303ab284460", 00:12:33.588 "is_configured": true, 00:12:33.588 "data_offset": 0, 00:12:33.588 "data_size": 65536 00:12:33.588 }, 00:12:33.588 { 00:12:33.588 "name": "BaseBdev3", 
00:12:33.588 "uuid": "71b73952-9689-4d7a-a33d-52452d80c045", 00:12:33.588 "is_configured": true, 00:12:33.588 "data_offset": 0, 00:12:33.588 "data_size": 65536 00:12:33.588 }, 00:12:33.588 { 00:12:33.588 "name": "BaseBdev4", 00:12:33.588 "uuid": "93dbf6ce-670c-48e3-92e3-e55519bbf023", 00:12:33.588 "is_configured": true, 00:12:33.588 "data_offset": 0, 00:12:33.588 "data_size": 65536 00:12:33.588 } 00:12:33.588 ] 00:12:33.588 }' 00:12:33.588 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.588 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.159 [2024-10-11 09:46:18.496366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.159 
09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.159 "name": "Existed_Raid", 00:12:34.159 "aliases": [ 00:12:34.159 "625e009b-0023-49ca-bc53-4b00b9562cc1" 00:12:34.159 ], 00:12:34.159 "product_name": "Raid Volume", 00:12:34.159 "block_size": 512, 00:12:34.159 "num_blocks": 262144, 00:12:34.159 "uuid": "625e009b-0023-49ca-bc53-4b00b9562cc1", 00:12:34.159 "assigned_rate_limits": { 00:12:34.159 "rw_ios_per_sec": 0, 00:12:34.159 "rw_mbytes_per_sec": 0, 00:12:34.159 "r_mbytes_per_sec": 0, 00:12:34.159 "w_mbytes_per_sec": 0 00:12:34.159 }, 00:12:34.159 "claimed": false, 00:12:34.159 "zoned": false, 00:12:34.159 "supported_io_types": { 00:12:34.159 "read": true, 00:12:34.159 "write": true, 00:12:34.159 "unmap": true, 00:12:34.159 "flush": true, 00:12:34.159 "reset": true, 00:12:34.159 "nvme_admin": false, 00:12:34.159 "nvme_io": false, 00:12:34.159 "nvme_io_md": false, 00:12:34.159 "write_zeroes": true, 00:12:34.159 "zcopy": false, 00:12:34.159 "get_zone_info": false, 00:12:34.159 "zone_management": false, 00:12:34.159 "zone_append": false, 00:12:34.159 "compare": false, 00:12:34.159 "compare_and_write": false, 00:12:34.159 "abort": false, 00:12:34.159 "seek_hole": false, 00:12:34.159 "seek_data": false, 00:12:34.159 "copy": false, 00:12:34.159 "nvme_iov_md": false 00:12:34.159 }, 00:12:34.159 "memory_domains": [ 00:12:34.159 { 00:12:34.159 "dma_device_id": "system", 00:12:34.159 "dma_device_type": 1 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.159 "dma_device_type": 2 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "dma_device_id": "system", 00:12:34.159 "dma_device_type": 1 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.159 "dma_device_type": 2 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "dma_device_id": "system", 00:12:34.159 "dma_device_type": 1 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:34.159 "dma_device_type": 2 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "dma_device_id": "system", 00:12:34.159 "dma_device_type": 1 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.159 "dma_device_type": 2 00:12:34.159 } 00:12:34.159 ], 00:12:34.159 "driver_specific": { 00:12:34.159 "raid": { 00:12:34.159 "uuid": "625e009b-0023-49ca-bc53-4b00b9562cc1", 00:12:34.159 "strip_size_kb": 64, 00:12:34.159 "state": "online", 00:12:34.159 "raid_level": "concat", 00:12:34.159 "superblock": false, 00:12:34.159 "num_base_bdevs": 4, 00:12:34.159 "num_base_bdevs_discovered": 4, 00:12:34.159 "num_base_bdevs_operational": 4, 00:12:34.159 "base_bdevs_list": [ 00:12:34.159 { 00:12:34.159 "name": "BaseBdev1", 00:12:34.159 "uuid": "81c58769-9330-4a92-85f5-98c49c3b540d", 00:12:34.159 "is_configured": true, 00:12:34.159 "data_offset": 0, 00:12:34.159 "data_size": 65536 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "name": "BaseBdev2", 00:12:34.159 "uuid": "063cf86b-265f-4dab-a284-9303ab284460", 00:12:34.159 "is_configured": true, 00:12:34.159 "data_offset": 0, 00:12:34.159 "data_size": 65536 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "name": "BaseBdev3", 00:12:34.159 "uuid": "71b73952-9689-4d7a-a33d-52452d80c045", 00:12:34.159 "is_configured": true, 00:12:34.159 "data_offset": 0, 00:12:34.159 "data_size": 65536 00:12:34.159 }, 00:12:34.159 { 00:12:34.159 "name": "BaseBdev4", 00:12:34.159 "uuid": "93dbf6ce-670c-48e3-92e3-e55519bbf023", 00:12:34.159 "is_configured": true, 00:12:34.159 "data_offset": 0, 00:12:34.159 "data_size": 65536 00:12:34.159 } 00:12:34.159 ] 00:12:34.159 } 00:12:34.159 } 00:12:34.159 }' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.159 BaseBdev2 
00:12:34.159 BaseBdev3 00:12:34.159 BaseBdev4' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.159 09:46:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.159 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.160 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.420 09:46:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.420 [2024-10-11 09:46:18.807701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.420 [2024-10-11 09:46:18.807872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.420 [2024-10-11 09:46:18.807979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.420 "name": "Existed_Raid", 00:12:34.420 "uuid": "625e009b-0023-49ca-bc53-4b00b9562cc1", 00:12:34.420 "strip_size_kb": 64, 00:12:34.420 "state": "offline", 00:12:34.420 "raid_level": "concat", 00:12:34.420 "superblock": false, 00:12:34.420 "num_base_bdevs": 4, 00:12:34.420 "num_base_bdevs_discovered": 3, 00:12:34.420 "num_base_bdevs_operational": 3, 00:12:34.420 "base_bdevs_list": [ 00:12:34.420 { 00:12:34.420 "name": null, 00:12:34.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.420 "is_configured": false, 00:12:34.420 "data_offset": 0, 00:12:34.420 "data_size": 65536 00:12:34.420 }, 00:12:34.420 { 00:12:34.420 "name": "BaseBdev2", 00:12:34.420 "uuid": "063cf86b-265f-4dab-a284-9303ab284460", 00:12:34.420 "is_configured": 
true, 00:12:34.420 "data_offset": 0, 00:12:34.420 "data_size": 65536 00:12:34.420 }, 00:12:34.420 { 00:12:34.420 "name": "BaseBdev3", 00:12:34.420 "uuid": "71b73952-9689-4d7a-a33d-52452d80c045", 00:12:34.420 "is_configured": true, 00:12:34.420 "data_offset": 0, 00:12:34.420 "data_size": 65536 00:12:34.420 }, 00:12:34.420 { 00:12:34.420 "name": "BaseBdev4", 00:12:34.420 "uuid": "93dbf6ce-670c-48e3-92e3-e55519bbf023", 00:12:34.420 "is_configured": true, 00:12:34.420 "data_offset": 0, 00:12:34.420 "data_size": 65536 00:12:34.420 } 00:12:34.420 ] 00:12:34.420 }' 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.420 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.989 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:34.989 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.989 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.990 [2024-10-11 09:46:19.417806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.990 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.990 [2024-10-11 09:46:19.588257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.249 09:46:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.249 [2024-10-11 09:46:19.748675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:35.249 [2024-10-11 09:46:19.748778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.249 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.509 BaseBdev2 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.509 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.509 [ 00:12:35.509 { 00:12:35.509 "name": "BaseBdev2", 00:12:35.509 "aliases": [ 00:12:35.509 "0f9f43ce-1b33-44e3-9442-4c639421c4e1" 00:12:35.509 ], 00:12:35.509 "product_name": "Malloc disk", 00:12:35.509 "block_size": 512, 00:12:35.509 "num_blocks": 65536, 00:12:35.509 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:35.509 "assigned_rate_limits": { 00:12:35.509 "rw_ios_per_sec": 0, 00:12:35.509 "rw_mbytes_per_sec": 0, 00:12:35.509 "r_mbytes_per_sec": 0, 00:12:35.509 "w_mbytes_per_sec": 0 00:12:35.509 }, 00:12:35.509 "claimed": false, 00:12:35.509 "zoned": false, 00:12:35.509 "supported_io_types": { 00:12:35.509 "read": true, 00:12:35.509 "write": true, 00:12:35.509 "unmap": true, 00:12:35.509 "flush": true, 00:12:35.509 "reset": true, 00:12:35.509 "nvme_admin": false, 00:12:35.509 "nvme_io": false, 00:12:35.509 "nvme_io_md": false, 00:12:35.509 "write_zeroes": true, 00:12:35.509 "zcopy": true, 00:12:35.509 "get_zone_info": false, 00:12:35.509 "zone_management": false, 00:12:35.509 "zone_append": false, 00:12:35.509 "compare": false, 00:12:35.509 "compare_and_write": false, 00:12:35.509 "abort": true, 00:12:35.509 "seek_hole": false, 00:12:35.509 
"seek_data": false, 00:12:35.509 "copy": true, 00:12:35.509 "nvme_iov_md": false 00:12:35.509 }, 00:12:35.509 "memory_domains": [ 00:12:35.509 { 00:12:35.509 "dma_device_id": "system", 00:12:35.509 "dma_device_type": 1 00:12:35.509 }, 00:12:35.509 { 00:12:35.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.509 "dma_device_type": 2 00:12:35.509 } 00:12:35.509 ], 00:12:35.509 "driver_specific": {} 00:12:35.509 } 00:12:35.509 ] 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.509 BaseBdev3 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.509 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.509 [ 00:12:35.509 { 00:12:35.509 "name": "BaseBdev3", 00:12:35.509 "aliases": [ 00:12:35.509 "4247b01b-4f3b-4136-a2a9-b8a3d847b580" 00:12:35.509 ], 00:12:35.509 "product_name": "Malloc disk", 00:12:35.509 "block_size": 512, 00:12:35.509 "num_blocks": 65536, 00:12:35.509 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:35.509 "assigned_rate_limits": { 00:12:35.509 "rw_ios_per_sec": 0, 00:12:35.509 "rw_mbytes_per_sec": 0, 00:12:35.509 "r_mbytes_per_sec": 0, 00:12:35.509 "w_mbytes_per_sec": 0 00:12:35.509 }, 00:12:35.509 "claimed": false, 00:12:35.509 "zoned": false, 00:12:35.509 "supported_io_types": { 00:12:35.509 "read": true, 00:12:35.509 "write": true, 00:12:35.509 "unmap": true, 00:12:35.509 "flush": true, 00:12:35.509 "reset": true, 00:12:35.509 "nvme_admin": false, 00:12:35.509 "nvme_io": false, 00:12:35.509 "nvme_io_md": false, 00:12:35.509 "write_zeroes": true, 00:12:35.509 "zcopy": true, 00:12:35.509 "get_zone_info": false, 00:12:35.509 "zone_management": false, 00:12:35.509 "zone_append": false, 00:12:35.509 "compare": false, 00:12:35.509 "compare_and_write": false, 00:12:35.509 "abort": true, 00:12:35.509 "seek_hole": false, 00:12:35.509 "seek_data": false, 
00:12:35.510 "copy": true, 00:12:35.510 "nvme_iov_md": false 00:12:35.510 }, 00:12:35.510 "memory_domains": [ 00:12:35.510 { 00:12:35.510 "dma_device_id": "system", 00:12:35.510 "dma_device_type": 1 00:12:35.510 }, 00:12:35.510 { 00:12:35.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.510 "dma_device_type": 2 00:12:35.510 } 00:12:35.510 ], 00:12:35.510 "driver_specific": {} 00:12:35.510 } 00:12:35.510 ] 00:12:35.510 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.510 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:35.510 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.510 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.510 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:35.510 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.510 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.769 BaseBdev4 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:35.769 
09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.769 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.769 [ 00:12:35.769 { 00:12:35.769 "name": "BaseBdev4", 00:12:35.769 "aliases": [ 00:12:35.769 "3091b380-572d-4357-9320-c5cfe643bb01" 00:12:35.769 ], 00:12:35.769 "product_name": "Malloc disk", 00:12:35.769 "block_size": 512, 00:12:35.769 "num_blocks": 65536, 00:12:35.769 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:35.769 "assigned_rate_limits": { 00:12:35.769 "rw_ios_per_sec": 0, 00:12:35.769 "rw_mbytes_per_sec": 0, 00:12:35.769 "r_mbytes_per_sec": 0, 00:12:35.769 "w_mbytes_per_sec": 0 00:12:35.769 }, 00:12:35.769 "claimed": false, 00:12:35.770 "zoned": false, 00:12:35.770 "supported_io_types": { 00:12:35.770 "read": true, 00:12:35.770 "write": true, 00:12:35.770 "unmap": true, 00:12:35.770 "flush": true, 00:12:35.770 "reset": true, 00:12:35.770 "nvme_admin": false, 00:12:35.770 "nvme_io": false, 00:12:35.770 "nvme_io_md": false, 00:12:35.770 "write_zeroes": true, 00:12:35.770 "zcopy": true, 00:12:35.770 "get_zone_info": false, 00:12:35.770 "zone_management": false, 00:12:35.770 "zone_append": false, 00:12:35.770 "compare": false, 00:12:35.770 "compare_and_write": false, 00:12:35.770 "abort": true, 00:12:35.770 "seek_hole": false, 00:12:35.770 "seek_data": false, 00:12:35.770 
"copy": true, 00:12:35.770 "nvme_iov_md": false 00:12:35.770 }, 00:12:35.770 "memory_domains": [ 00:12:35.770 { 00:12:35.770 "dma_device_id": "system", 00:12:35.770 "dma_device_type": 1 00:12:35.770 }, 00:12:35.770 { 00:12:35.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.770 "dma_device_type": 2 00:12:35.770 } 00:12:35.770 ], 00:12:35.770 "driver_specific": {} 00:12:35.770 } 00:12:35.770 ] 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.770 [2024-10-11 09:46:20.197455] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.770 [2024-10-11 09:46:20.197631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.770 [2024-10-11 09:46:20.197672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.770 [2024-10-11 09:46:20.200037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.770 [2024-10-11 09:46:20.200107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.770 09:46:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.770 "name": "Existed_Raid", 00:12:35.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.770 "strip_size_kb": 64, 00:12:35.770 "state": "configuring", 00:12:35.770 
"raid_level": "concat", 00:12:35.770 "superblock": false, 00:12:35.770 "num_base_bdevs": 4, 00:12:35.770 "num_base_bdevs_discovered": 3, 00:12:35.770 "num_base_bdevs_operational": 4, 00:12:35.770 "base_bdevs_list": [ 00:12:35.770 { 00:12:35.770 "name": "BaseBdev1", 00:12:35.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.770 "is_configured": false, 00:12:35.770 "data_offset": 0, 00:12:35.770 "data_size": 0 00:12:35.770 }, 00:12:35.770 { 00:12:35.770 "name": "BaseBdev2", 00:12:35.770 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:35.770 "is_configured": true, 00:12:35.770 "data_offset": 0, 00:12:35.770 "data_size": 65536 00:12:35.770 }, 00:12:35.770 { 00:12:35.770 "name": "BaseBdev3", 00:12:35.770 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:35.770 "is_configured": true, 00:12:35.770 "data_offset": 0, 00:12:35.770 "data_size": 65536 00:12:35.770 }, 00:12:35.770 { 00:12:35.770 "name": "BaseBdev4", 00:12:35.770 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:35.770 "is_configured": true, 00:12:35.770 "data_offset": 0, 00:12:35.770 "data_size": 65536 00:12:35.770 } 00:12:35.770 ] 00:12:35.770 }' 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.770 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 [2024-10-11 09:46:20.696649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.339 "name": "Existed_Raid", 00:12:36.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.339 "strip_size_kb": 64, 00:12:36.339 "state": "configuring", 00:12:36.339 "raid_level": "concat", 00:12:36.339 "superblock": false, 
00:12:36.339 "num_base_bdevs": 4, 00:12:36.339 "num_base_bdevs_discovered": 2, 00:12:36.339 "num_base_bdevs_operational": 4, 00:12:36.339 "base_bdevs_list": [ 00:12:36.339 { 00:12:36.339 "name": "BaseBdev1", 00:12:36.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.339 "is_configured": false, 00:12:36.339 "data_offset": 0, 00:12:36.339 "data_size": 0 00:12:36.339 }, 00:12:36.339 { 00:12:36.339 "name": null, 00:12:36.339 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:36.339 "is_configured": false, 00:12:36.339 "data_offset": 0, 00:12:36.339 "data_size": 65536 00:12:36.339 }, 00:12:36.339 { 00:12:36.339 "name": "BaseBdev3", 00:12:36.339 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:36.339 "is_configured": true, 00:12:36.339 "data_offset": 0, 00:12:36.339 "data_size": 65536 00:12:36.339 }, 00:12:36.339 { 00:12:36.339 "name": "BaseBdev4", 00:12:36.339 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:36.339 "is_configured": true, 00:12:36.339 "data_offset": 0, 00:12:36.339 "data_size": 65536 00:12:36.339 } 00:12:36.339 ] 00:12:36.339 }' 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.339 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.598 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:36.598 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.598 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.598 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:36.598 09:46:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.599 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.599 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.858 [2024-10-11 09:46:21.268666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.858 BaseBdev1 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.858 09:46:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.858 [ 00:12:36.858 { 00:12:36.858 "name": "BaseBdev1", 00:12:36.858 "aliases": [ 00:12:36.858 "e957bad3-21d6-494f-a287-a75e21e43be8" 00:12:36.858 ], 00:12:36.858 "product_name": "Malloc disk", 00:12:36.858 "block_size": 512, 00:12:36.858 "num_blocks": 65536, 00:12:36.858 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:36.858 "assigned_rate_limits": { 00:12:36.858 "rw_ios_per_sec": 0, 00:12:36.858 "rw_mbytes_per_sec": 0, 00:12:36.858 "r_mbytes_per_sec": 0, 00:12:36.858 "w_mbytes_per_sec": 0 00:12:36.858 }, 00:12:36.858 "claimed": true, 00:12:36.858 "claim_type": "exclusive_write", 00:12:36.858 "zoned": false, 00:12:36.858 "supported_io_types": { 00:12:36.858 "read": true, 00:12:36.858 "write": true, 00:12:36.859 "unmap": true, 00:12:36.859 "flush": true, 00:12:36.859 "reset": true, 00:12:36.859 "nvme_admin": false, 00:12:36.859 "nvme_io": false, 00:12:36.859 "nvme_io_md": false, 00:12:36.859 "write_zeroes": true, 00:12:36.859 "zcopy": true, 00:12:36.859 "get_zone_info": false, 00:12:36.859 "zone_management": false, 00:12:36.859 "zone_append": false, 00:12:36.859 "compare": false, 00:12:36.859 "compare_and_write": false, 00:12:36.859 "abort": true, 00:12:36.859 "seek_hole": false, 00:12:36.859 "seek_data": false, 00:12:36.859 "copy": true, 00:12:36.859 "nvme_iov_md": false 00:12:36.859 }, 00:12:36.859 "memory_domains": [ 00:12:36.859 { 00:12:36.859 "dma_device_id": "system", 00:12:36.859 "dma_device_type": 1 00:12:36.859 }, 00:12:36.859 { 00:12:36.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.859 "dma_device_type": 2 00:12:36.859 } 00:12:36.859 ], 00:12:36.859 "driver_specific": {} 00:12:36.859 } 00:12:36.859 ] 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.859 "name": "Existed_Raid", 00:12:36.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.859 "strip_size_kb": 64, 00:12:36.859 "state": "configuring", 00:12:36.859 "raid_level": "concat", 00:12:36.859 "superblock": false, 
00:12:36.859 "num_base_bdevs": 4, 00:12:36.859 "num_base_bdevs_discovered": 3, 00:12:36.859 "num_base_bdevs_operational": 4, 00:12:36.859 "base_bdevs_list": [ 00:12:36.859 { 00:12:36.859 "name": "BaseBdev1", 00:12:36.859 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:36.859 "is_configured": true, 00:12:36.859 "data_offset": 0, 00:12:36.859 "data_size": 65536 00:12:36.859 }, 00:12:36.859 { 00:12:36.859 "name": null, 00:12:36.859 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:36.859 "is_configured": false, 00:12:36.859 "data_offset": 0, 00:12:36.859 "data_size": 65536 00:12:36.859 }, 00:12:36.859 { 00:12:36.859 "name": "BaseBdev3", 00:12:36.859 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:36.859 "is_configured": true, 00:12:36.859 "data_offset": 0, 00:12:36.859 "data_size": 65536 00:12:36.859 }, 00:12:36.859 { 00:12:36.859 "name": "BaseBdev4", 00:12:36.859 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:36.859 "is_configured": true, 00:12:36.859 "data_offset": 0, 00:12:36.859 "data_size": 65536 00:12:36.859 } 00:12:36.859 ] 00:12:36.859 }' 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.859 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:37.444 09:46:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.444 [2024-10-11 09:46:21.827907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.444 09:46:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.444 "name": "Existed_Raid", 00:12:37.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.444 "strip_size_kb": 64, 00:12:37.444 "state": "configuring", 00:12:37.444 "raid_level": "concat", 00:12:37.444 "superblock": false, 00:12:37.444 "num_base_bdevs": 4, 00:12:37.444 "num_base_bdevs_discovered": 2, 00:12:37.444 "num_base_bdevs_operational": 4, 00:12:37.444 "base_bdevs_list": [ 00:12:37.444 { 00:12:37.444 "name": "BaseBdev1", 00:12:37.444 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:37.444 "is_configured": true, 00:12:37.444 "data_offset": 0, 00:12:37.444 "data_size": 65536 00:12:37.444 }, 00:12:37.444 { 00:12:37.444 "name": null, 00:12:37.444 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:37.444 "is_configured": false, 00:12:37.444 "data_offset": 0, 00:12:37.444 "data_size": 65536 00:12:37.444 }, 00:12:37.444 { 00:12:37.444 "name": null, 00:12:37.444 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:37.444 "is_configured": false, 00:12:37.444 "data_offset": 0, 00:12:37.444 "data_size": 65536 00:12:37.444 }, 00:12:37.444 { 00:12:37.444 "name": "BaseBdev4", 00:12:37.444 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:37.444 "is_configured": true, 00:12:37.444 "data_offset": 0, 00:12:37.444 "data_size": 65536 00:12:37.444 } 00:12:37.444 ] 00:12:37.444 }' 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.444 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.704 [2024-10-11 09:46:22.323700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.704 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.963 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.963 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.963 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.963 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.963 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.963 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.964 "name": "Existed_Raid", 00:12:37.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.964 "strip_size_kb": 64, 00:12:37.964 "state": "configuring", 00:12:37.964 "raid_level": "concat", 00:12:37.964 "superblock": false, 00:12:37.964 "num_base_bdevs": 4, 00:12:37.964 "num_base_bdevs_discovered": 3, 00:12:37.964 "num_base_bdevs_operational": 4, 00:12:37.964 "base_bdevs_list": [ 00:12:37.964 { 00:12:37.964 "name": "BaseBdev1", 00:12:37.964 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:37.964 "is_configured": true, 00:12:37.964 "data_offset": 0, 00:12:37.964 "data_size": 65536 00:12:37.964 }, 00:12:37.964 { 00:12:37.964 "name": null, 00:12:37.964 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:37.964 "is_configured": false, 00:12:37.964 "data_offset": 0, 00:12:37.964 "data_size": 65536 00:12:37.964 }, 00:12:37.964 { 00:12:37.964 "name": "BaseBdev3", 00:12:37.964 "uuid": 
"4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:37.964 "is_configured": true, 00:12:37.964 "data_offset": 0, 00:12:37.964 "data_size": 65536 00:12:37.964 }, 00:12:37.964 { 00:12:37.964 "name": "BaseBdev4", 00:12:37.964 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:37.964 "is_configured": true, 00:12:37.964 "data_offset": 0, 00:12:37.964 "data_size": 65536 00:12:37.964 } 00:12:37.964 ] 00:12:37.964 }' 00:12:37.964 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.964 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.223 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.223 [2024-10-11 09:46:22.834933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.482 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.482 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:38.482 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.482 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.483 "name": "Existed_Raid", 00:12:38.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.483 "strip_size_kb": 64, 00:12:38.483 "state": "configuring", 00:12:38.483 "raid_level": "concat", 00:12:38.483 "superblock": false, 00:12:38.483 "num_base_bdevs": 4, 00:12:38.483 
"num_base_bdevs_discovered": 2, 00:12:38.483 "num_base_bdevs_operational": 4, 00:12:38.483 "base_bdevs_list": [ 00:12:38.483 { 00:12:38.483 "name": null, 00:12:38.483 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:38.483 "is_configured": false, 00:12:38.483 "data_offset": 0, 00:12:38.483 "data_size": 65536 00:12:38.483 }, 00:12:38.483 { 00:12:38.483 "name": null, 00:12:38.483 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:38.483 "is_configured": false, 00:12:38.483 "data_offset": 0, 00:12:38.483 "data_size": 65536 00:12:38.483 }, 00:12:38.483 { 00:12:38.483 "name": "BaseBdev3", 00:12:38.483 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:38.483 "is_configured": true, 00:12:38.483 "data_offset": 0, 00:12:38.483 "data_size": 65536 00:12:38.483 }, 00:12:38.483 { 00:12:38.483 "name": "BaseBdev4", 00:12:38.483 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:38.483 "is_configured": true, 00:12:38.483 "data_offset": 0, 00:12:38.483 "data_size": 65536 00:12:38.483 } 00:12:38.483 ] 00:12:38.483 }' 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.483 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.051 [2024-10-11 09:46:23.473445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.051 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.052 "name": "Existed_Raid", 00:12:39.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.052 "strip_size_kb": 64, 00:12:39.052 "state": "configuring", 00:12:39.052 "raid_level": "concat", 00:12:39.052 "superblock": false, 00:12:39.052 "num_base_bdevs": 4, 00:12:39.052 "num_base_bdevs_discovered": 3, 00:12:39.052 "num_base_bdevs_operational": 4, 00:12:39.052 "base_bdevs_list": [ 00:12:39.052 { 00:12:39.052 "name": null, 00:12:39.052 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:39.052 "is_configured": false, 00:12:39.052 "data_offset": 0, 00:12:39.052 "data_size": 65536 00:12:39.052 }, 00:12:39.052 { 00:12:39.052 "name": "BaseBdev2", 00:12:39.052 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:39.052 "is_configured": true, 00:12:39.052 "data_offset": 0, 00:12:39.052 "data_size": 65536 00:12:39.052 }, 00:12:39.052 { 00:12:39.052 "name": "BaseBdev3", 00:12:39.052 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:39.052 "is_configured": true, 00:12:39.052 "data_offset": 0, 00:12:39.052 "data_size": 65536 00:12:39.052 }, 00:12:39.052 { 00:12:39.052 "name": "BaseBdev4", 00:12:39.052 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:39.052 "is_configured": true, 00:12:39.052 "data_offset": 0, 00:12:39.052 "data_size": 65536 00:12:39.052 } 00:12:39.052 ] 00:12:39.052 }' 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.052 09:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e957bad3-21d6-494f-a287-a75e21e43be8 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.621 [2024-10-11 09:46:24.132643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:39.621 [2024-10-11 09:46:24.132847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:39.621 [2024-10-11 09:46:24.132864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:39.621 [2024-10-11 09:46:24.133194] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:39.621 [2024-10-11 09:46:24.133372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:39.621 [2024-10-11 09:46:24.133390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:39.621 [2024-10-11 09:46:24.133675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.621 NewBaseBdev 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:39.621 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.621 09:46:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.621 [ 00:12:39.621 { 00:12:39.621 "name": "NewBaseBdev", 00:12:39.621 "aliases": [ 00:12:39.621 "e957bad3-21d6-494f-a287-a75e21e43be8" 00:12:39.621 ], 00:12:39.621 "product_name": "Malloc disk", 00:12:39.621 "block_size": 512, 00:12:39.621 "num_blocks": 65536, 00:12:39.621 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:39.621 "assigned_rate_limits": { 00:12:39.621 "rw_ios_per_sec": 0, 00:12:39.621 "rw_mbytes_per_sec": 0, 00:12:39.621 "r_mbytes_per_sec": 0, 00:12:39.621 "w_mbytes_per_sec": 0 00:12:39.621 }, 00:12:39.621 "claimed": true, 00:12:39.621 "claim_type": "exclusive_write", 00:12:39.621 "zoned": false, 00:12:39.621 "supported_io_types": { 00:12:39.621 "read": true, 00:12:39.621 "write": true, 00:12:39.621 "unmap": true, 00:12:39.621 "flush": true, 00:12:39.621 "reset": true, 00:12:39.621 "nvme_admin": false, 00:12:39.621 "nvme_io": false, 00:12:39.621 "nvme_io_md": false, 00:12:39.621 "write_zeroes": true, 00:12:39.621 "zcopy": true, 00:12:39.621 "get_zone_info": false, 00:12:39.621 "zone_management": false, 00:12:39.621 "zone_append": false, 00:12:39.621 "compare": false, 00:12:39.621 "compare_and_write": false, 00:12:39.621 "abort": true, 00:12:39.621 "seek_hole": false, 00:12:39.621 "seek_data": false, 00:12:39.621 "copy": true, 00:12:39.621 "nvme_iov_md": false 00:12:39.621 }, 00:12:39.622 "memory_domains": [ 00:12:39.622 { 00:12:39.622 "dma_device_id": "system", 00:12:39.622 "dma_device_type": 1 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.622 "dma_device_type": 2 00:12:39.622 } 00:12:39.622 ], 00:12:39.622 "driver_specific": {} 00:12:39.622 } 00:12:39.622 ] 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:39.622 09:46:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.622 "name": "Existed_Raid", 00:12:39.622 "uuid": "d301c094-102f-48d7-9a0d-68d82e62d561", 00:12:39.622 "strip_size_kb": 64, 00:12:39.622 "state": "online", 00:12:39.622 "raid_level": 
"concat", 00:12:39.622 "superblock": false, 00:12:39.622 "num_base_bdevs": 4, 00:12:39.622 "num_base_bdevs_discovered": 4, 00:12:39.622 "num_base_bdevs_operational": 4, 00:12:39.622 "base_bdevs_list": [ 00:12:39.622 { 00:12:39.622 "name": "NewBaseBdev", 00:12:39.622 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "name": "BaseBdev2", 00:12:39.622 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "name": "BaseBdev3", 00:12:39.622 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "name": "BaseBdev4", 00:12:39.622 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 } 00:12:39.622 ] 00:12:39.622 }' 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.622 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.191 [2024-10-11 09:46:24.652333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.191 "name": "Existed_Raid", 00:12:40.191 "aliases": [ 00:12:40.191 "d301c094-102f-48d7-9a0d-68d82e62d561" 00:12:40.191 ], 00:12:40.191 "product_name": "Raid Volume", 00:12:40.191 "block_size": 512, 00:12:40.191 "num_blocks": 262144, 00:12:40.191 "uuid": "d301c094-102f-48d7-9a0d-68d82e62d561", 00:12:40.191 "assigned_rate_limits": { 00:12:40.191 "rw_ios_per_sec": 0, 00:12:40.191 "rw_mbytes_per_sec": 0, 00:12:40.191 "r_mbytes_per_sec": 0, 00:12:40.191 "w_mbytes_per_sec": 0 00:12:40.191 }, 00:12:40.191 "claimed": false, 00:12:40.191 "zoned": false, 00:12:40.191 "supported_io_types": { 00:12:40.191 "read": true, 00:12:40.191 "write": true, 00:12:40.191 "unmap": true, 00:12:40.191 "flush": true, 00:12:40.191 "reset": true, 00:12:40.191 "nvme_admin": false, 00:12:40.191 "nvme_io": false, 00:12:40.191 "nvme_io_md": false, 00:12:40.191 "write_zeroes": true, 00:12:40.191 "zcopy": false, 00:12:40.191 "get_zone_info": false, 00:12:40.191 "zone_management": false, 00:12:40.191 "zone_append": false, 00:12:40.191 "compare": false, 00:12:40.191 "compare_and_write": false, 00:12:40.191 "abort": false, 00:12:40.191 "seek_hole": false, 00:12:40.191 "seek_data": false, 00:12:40.191 "copy": false, 
00:12:40.191 "nvme_iov_md": false 00:12:40.191 }, 00:12:40.191 "memory_domains": [ 00:12:40.191 { 00:12:40.191 "dma_device_id": "system", 00:12:40.191 "dma_device_type": 1 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.191 "dma_device_type": 2 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "dma_device_id": "system", 00:12:40.191 "dma_device_type": 1 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.191 "dma_device_type": 2 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "dma_device_id": "system", 00:12:40.191 "dma_device_type": 1 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.191 "dma_device_type": 2 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "dma_device_id": "system", 00:12:40.191 "dma_device_type": 1 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.191 "dma_device_type": 2 00:12:40.191 } 00:12:40.191 ], 00:12:40.191 "driver_specific": { 00:12:40.191 "raid": { 00:12:40.191 "uuid": "d301c094-102f-48d7-9a0d-68d82e62d561", 00:12:40.191 "strip_size_kb": 64, 00:12:40.191 "state": "online", 00:12:40.191 "raid_level": "concat", 00:12:40.191 "superblock": false, 00:12:40.191 "num_base_bdevs": 4, 00:12:40.191 "num_base_bdevs_discovered": 4, 00:12:40.191 "num_base_bdevs_operational": 4, 00:12:40.191 "base_bdevs_list": [ 00:12:40.191 { 00:12:40.191 "name": "NewBaseBdev", 00:12:40.191 "uuid": "e957bad3-21d6-494f-a287-a75e21e43be8", 00:12:40.191 "is_configured": true, 00:12:40.191 "data_offset": 0, 00:12:40.191 "data_size": 65536 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "name": "BaseBdev2", 00:12:40.191 "uuid": "0f9f43ce-1b33-44e3-9442-4c639421c4e1", 00:12:40.191 "is_configured": true, 00:12:40.191 "data_offset": 0, 00:12:40.191 "data_size": 65536 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "name": "BaseBdev3", 00:12:40.191 "uuid": "4247b01b-4f3b-4136-a2a9-b8a3d847b580", 00:12:40.191 
"is_configured": true, 00:12:40.191 "data_offset": 0, 00:12:40.191 "data_size": 65536 00:12:40.191 }, 00:12:40.191 { 00:12:40.191 "name": "BaseBdev4", 00:12:40.191 "uuid": "3091b380-572d-4357-9320-c5cfe643bb01", 00:12:40.191 "is_configured": true, 00:12:40.191 "data_offset": 0, 00:12:40.191 "data_size": 65536 00:12:40.191 } 00:12:40.191 ] 00:12:40.191 } 00:12:40.191 } 00:12:40.191 }' 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:40.191 BaseBdev2 00:12:40.191 BaseBdev3 00:12:40.191 BaseBdev4' 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.191 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.452 09:46:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.452 09:46:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.452 09:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.452 [2024-10-11 09:46:25.007385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.452 [2024-10-11 09:46:25.007441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.452 [2024-10-11 09:46:25.007545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.452 [2024-10-11 09:46:25.007625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.452 [2024-10-11 09:46:25.007638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71761 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71761 ']' 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71761 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71761 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.452 killing process with pid 71761 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71761' 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71761 00:12:40.452 [2024-10-11 09:46:25.055651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.452 09:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71761 00:12:41.020 [2024-10-11 09:46:25.507707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.397 09:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:42.397 00:12:42.397 real 0m12.500s 00:12:42.397 user 0m19.582s 00:12:42.397 sys 0m2.457s 00:12:42.397 ************************************ 00:12:42.397 END TEST raid_state_function_test 00:12:42.398 ************************************ 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:42.398 09:46:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:42.398 09:46:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:42.398 09:46:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.398 09:46:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.398 ************************************ 00:12:42.398 START TEST raid_state_function_test_sb 00:12:42.398 ************************************ 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.398 
09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:42.398 Process raid pid: 72451 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:42.398 09:46:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72451 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72451' 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72451 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72451 ']' 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.398 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.398 [2024-10-11 09:46:26.908074] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:42.398 [2024-10-11 09:46:26.908339] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.657 [2024-10-11 09:46:27.067718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.657 [2024-10-11 09:46:27.191335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.915 [2024-10-11 09:46:27.416565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.915 [2024-10-11 09:46:27.416704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.174 [2024-10-11 09:46:27.768216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.174 [2024-10-11 09:46:27.768367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.174 [2024-10-11 09:46:27.768414] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.174 [2024-10-11 09:46:27.768451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.174 [2024-10-11 09:46:27.768486] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:43.174 [2024-10-11 09:46:27.768521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.174 [2024-10-11 09:46:27.768554] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:43.174 [2024-10-11 09:46:27.768590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.174 
09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.174 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.432 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.432 "name": "Existed_Raid", 00:12:43.432 "uuid": "ed6370c5-4a52-41bb-a87a-b4cc94b2df56", 00:12:43.432 "strip_size_kb": 64, 00:12:43.432 "state": "configuring", 00:12:43.432 "raid_level": "concat", 00:12:43.432 "superblock": true, 00:12:43.432 "num_base_bdevs": 4, 00:12:43.432 "num_base_bdevs_discovered": 0, 00:12:43.432 "num_base_bdevs_operational": 4, 00:12:43.432 "base_bdevs_list": [ 00:12:43.432 { 00:12:43.432 "name": "BaseBdev1", 00:12:43.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.432 "is_configured": false, 00:12:43.432 "data_offset": 0, 00:12:43.432 "data_size": 0 00:12:43.432 }, 00:12:43.432 { 00:12:43.432 "name": "BaseBdev2", 00:12:43.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.432 "is_configured": false, 00:12:43.432 "data_offset": 0, 00:12:43.432 "data_size": 0 00:12:43.432 }, 00:12:43.432 { 00:12:43.432 "name": "BaseBdev3", 00:12:43.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.432 "is_configured": false, 00:12:43.432 "data_offset": 0, 00:12:43.432 "data_size": 0 00:12:43.432 }, 00:12:43.432 { 00:12:43.432 "name": "BaseBdev4", 00:12:43.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.432 "is_configured": false, 00:12:43.432 "data_offset": 0, 00:12:43.432 "data_size": 0 00:12:43.432 } 00:12:43.432 ] 00:12:43.432 }' 00:12:43.432 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.432 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.691 09:46:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.691 [2024-10-11 09:46:28.259343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.691 [2024-10-11 09:46:28.259390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.691 [2024-10-11 09:46:28.271349] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.691 [2024-10-11 09:46:28.271395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.691 [2024-10-11 09:46:28.271404] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.691 [2024-10-11 09:46:28.271414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.691 [2024-10-11 09:46:28.271421] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.691 [2024-10-11 09:46:28.271429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.691 [2024-10-11 09:46:28.271436] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:43.691 [2024-10-11 09:46:28.271445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.691 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.950 [2024-10-11 09:46:28.323245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.950 BaseBdev1 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.950 [ 00:12:43.950 { 00:12:43.950 "name": "BaseBdev1", 00:12:43.950 "aliases": [ 00:12:43.950 "ff855a9e-bf98-4fad-98b9-1ed33b025128" 00:12:43.950 ], 00:12:43.950 "product_name": "Malloc disk", 00:12:43.950 "block_size": 512, 00:12:43.950 "num_blocks": 65536, 00:12:43.950 "uuid": "ff855a9e-bf98-4fad-98b9-1ed33b025128", 00:12:43.950 "assigned_rate_limits": { 00:12:43.950 "rw_ios_per_sec": 0, 00:12:43.950 "rw_mbytes_per_sec": 0, 00:12:43.950 "r_mbytes_per_sec": 0, 00:12:43.950 "w_mbytes_per_sec": 0 00:12:43.950 }, 00:12:43.950 "claimed": true, 00:12:43.950 "claim_type": "exclusive_write", 00:12:43.950 "zoned": false, 00:12:43.950 "supported_io_types": { 00:12:43.950 "read": true, 00:12:43.950 "write": true, 00:12:43.950 "unmap": true, 00:12:43.950 "flush": true, 00:12:43.950 "reset": true, 00:12:43.950 "nvme_admin": false, 00:12:43.950 "nvme_io": false, 00:12:43.950 "nvme_io_md": false, 00:12:43.950 "write_zeroes": true, 00:12:43.950 "zcopy": true, 00:12:43.950 "get_zone_info": false, 00:12:43.950 "zone_management": false, 00:12:43.950 "zone_append": false, 00:12:43.950 "compare": false, 00:12:43.950 "compare_and_write": false, 00:12:43.950 "abort": true, 00:12:43.950 "seek_hole": false, 00:12:43.950 "seek_data": false, 00:12:43.950 "copy": true, 00:12:43.950 "nvme_iov_md": false 00:12:43.950 }, 00:12:43.950 "memory_domains": [ 00:12:43.950 { 00:12:43.950 "dma_device_id": "system", 00:12:43.950 "dma_device_type": 1 00:12:43.950 }, 00:12:43.950 { 00:12:43.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.950 "dma_device_type": 2 00:12:43.950 } 
00:12:43.950 ], 00:12:43.950 "driver_specific": {} 00:12:43.950 } 00:12:43.950 ] 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.950 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.951 09:46:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.951 "name": "Existed_Raid", 00:12:43.951 "uuid": "57e94922-58c2-419c-aa72-717bcd2def86", 00:12:43.951 "strip_size_kb": 64, 00:12:43.951 "state": "configuring", 00:12:43.951 "raid_level": "concat", 00:12:43.951 "superblock": true, 00:12:43.951 "num_base_bdevs": 4, 00:12:43.951 "num_base_bdevs_discovered": 1, 00:12:43.951 "num_base_bdevs_operational": 4, 00:12:43.951 "base_bdevs_list": [ 00:12:43.951 { 00:12:43.951 "name": "BaseBdev1", 00:12:43.951 "uuid": "ff855a9e-bf98-4fad-98b9-1ed33b025128", 00:12:43.951 "is_configured": true, 00:12:43.951 "data_offset": 2048, 00:12:43.951 "data_size": 63488 00:12:43.951 }, 00:12:43.951 { 00:12:43.951 "name": "BaseBdev2", 00:12:43.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.951 "is_configured": false, 00:12:43.951 "data_offset": 0, 00:12:43.951 "data_size": 0 00:12:43.951 }, 00:12:43.951 { 00:12:43.951 "name": "BaseBdev3", 00:12:43.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.951 "is_configured": false, 00:12:43.951 "data_offset": 0, 00:12:43.951 "data_size": 0 00:12:43.951 }, 00:12:43.951 { 00:12:43.951 "name": "BaseBdev4", 00:12:43.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.951 "is_configured": false, 00:12:43.951 "data_offset": 0, 00:12:43.951 "data_size": 0 00:12:43.951 } 00:12:43.951 ] 00:12:43.951 }' 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.951 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.534 09:46:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.534 [2024-10-11 09:46:28.862437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.534 [2024-10-11 09:46:28.862582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.534 [2024-10-11 09:46:28.874481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.534 [2024-10-11 09:46:28.876783] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.534 [2024-10-11 09:46:28.876880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.534 [2024-10-11 09:46:28.876940] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.534 [2024-10-11 09:46:28.876970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.534 [2024-10-11 09:46:28.877039] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.534 [2024-10-11 09:46:28.877068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.534 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:44.535 "name": "Existed_Raid", 00:12:44.535 "uuid": "6b85e3d6-84e8-48da-b08d-59d71f664610", 00:12:44.535 "strip_size_kb": 64, 00:12:44.535 "state": "configuring", 00:12:44.535 "raid_level": "concat", 00:12:44.535 "superblock": true, 00:12:44.535 "num_base_bdevs": 4, 00:12:44.535 "num_base_bdevs_discovered": 1, 00:12:44.535 "num_base_bdevs_operational": 4, 00:12:44.535 "base_bdevs_list": [ 00:12:44.535 { 00:12:44.535 "name": "BaseBdev1", 00:12:44.535 "uuid": "ff855a9e-bf98-4fad-98b9-1ed33b025128", 00:12:44.535 "is_configured": true, 00:12:44.535 "data_offset": 2048, 00:12:44.535 "data_size": 63488 00:12:44.535 }, 00:12:44.535 { 00:12:44.535 "name": "BaseBdev2", 00:12:44.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.535 "is_configured": false, 00:12:44.535 "data_offset": 0, 00:12:44.535 "data_size": 0 00:12:44.535 }, 00:12:44.535 { 00:12:44.535 "name": "BaseBdev3", 00:12:44.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.535 "is_configured": false, 00:12:44.535 "data_offset": 0, 00:12:44.535 "data_size": 0 00:12:44.535 }, 00:12:44.535 { 00:12:44.535 "name": "BaseBdev4", 00:12:44.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.535 "is_configured": false, 00:12:44.535 "data_offset": 0, 00:12:44.535 "data_size": 0 00:12:44.535 } 00:12:44.535 ] 00:12:44.535 }' 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.535 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.794 [2024-10-11 09:46:29.365201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:44.794 BaseBdev2 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.794 [ 00:12:44.794 { 00:12:44.794 "name": "BaseBdev2", 00:12:44.794 "aliases": [ 00:12:44.794 "2a4e0538-a545-4422-9de5-770c6ce25781" 00:12:44.794 ], 00:12:44.794 "product_name": "Malloc disk", 00:12:44.794 "block_size": 512, 00:12:44.794 "num_blocks": 65536, 00:12:44.794 "uuid": "2a4e0538-a545-4422-9de5-770c6ce25781", 
00:12:44.794 "assigned_rate_limits": { 00:12:44.794 "rw_ios_per_sec": 0, 00:12:44.794 "rw_mbytes_per_sec": 0, 00:12:44.794 "r_mbytes_per_sec": 0, 00:12:44.794 "w_mbytes_per_sec": 0 00:12:44.794 }, 00:12:44.794 "claimed": true, 00:12:44.794 "claim_type": "exclusive_write", 00:12:44.794 "zoned": false, 00:12:44.794 "supported_io_types": { 00:12:44.794 "read": true, 00:12:44.794 "write": true, 00:12:44.794 "unmap": true, 00:12:44.794 "flush": true, 00:12:44.794 "reset": true, 00:12:44.794 "nvme_admin": false, 00:12:44.794 "nvme_io": false, 00:12:44.794 "nvme_io_md": false, 00:12:44.794 "write_zeroes": true, 00:12:44.794 "zcopy": true, 00:12:44.794 "get_zone_info": false, 00:12:44.794 "zone_management": false, 00:12:44.794 "zone_append": false, 00:12:44.794 "compare": false, 00:12:44.794 "compare_and_write": false, 00:12:44.794 "abort": true, 00:12:44.794 "seek_hole": false, 00:12:44.794 "seek_data": false, 00:12:44.794 "copy": true, 00:12:44.794 "nvme_iov_md": false 00:12:44.794 }, 00:12:44.794 "memory_domains": [ 00:12:44.794 { 00:12:44.794 "dma_device_id": "system", 00:12:44.794 "dma_device_type": 1 00:12:44.794 }, 00:12:44.794 { 00:12:44.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.794 "dma_device_type": 2 00:12:44.794 } 00:12:44.794 ], 00:12:44.794 "driver_specific": {} 00:12:44.794 } 00:12:44.794 ] 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.794 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.053 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.053 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.053 "name": "Existed_Raid", 00:12:45.053 "uuid": "6b85e3d6-84e8-48da-b08d-59d71f664610", 00:12:45.053 "strip_size_kb": 64, 00:12:45.053 "state": "configuring", 00:12:45.053 "raid_level": "concat", 00:12:45.053 "superblock": true, 00:12:45.053 "num_base_bdevs": 4, 00:12:45.053 "num_base_bdevs_discovered": 2, 00:12:45.053 
"num_base_bdevs_operational": 4, 00:12:45.053 "base_bdevs_list": [ 00:12:45.053 { 00:12:45.053 "name": "BaseBdev1", 00:12:45.053 "uuid": "ff855a9e-bf98-4fad-98b9-1ed33b025128", 00:12:45.053 "is_configured": true, 00:12:45.053 "data_offset": 2048, 00:12:45.053 "data_size": 63488 00:12:45.053 }, 00:12:45.053 { 00:12:45.053 "name": "BaseBdev2", 00:12:45.053 "uuid": "2a4e0538-a545-4422-9de5-770c6ce25781", 00:12:45.053 "is_configured": true, 00:12:45.053 "data_offset": 2048, 00:12:45.053 "data_size": 63488 00:12:45.053 }, 00:12:45.053 { 00:12:45.053 "name": "BaseBdev3", 00:12:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.053 "is_configured": false, 00:12:45.053 "data_offset": 0, 00:12:45.053 "data_size": 0 00:12:45.053 }, 00:12:45.053 { 00:12:45.053 "name": "BaseBdev4", 00:12:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.053 "is_configured": false, 00:12:45.053 "data_offset": 0, 00:12:45.053 "data_size": 0 00:12:45.053 } 00:12:45.053 ] 00:12:45.053 }' 00:12:45.053 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.053 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.312 [2024-10-11 09:46:29.929302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.312 BaseBdev3 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.312 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.570 [ 00:12:45.570 { 00:12:45.570 "name": "BaseBdev3", 00:12:45.570 "aliases": [ 00:12:45.570 "a2624fac-25b8-4c3f-83e8-f78dc8961423" 00:12:45.570 ], 00:12:45.570 "product_name": "Malloc disk", 00:12:45.570 "block_size": 512, 00:12:45.570 "num_blocks": 65536, 00:12:45.570 "uuid": "a2624fac-25b8-4c3f-83e8-f78dc8961423", 00:12:45.570 "assigned_rate_limits": { 00:12:45.570 "rw_ios_per_sec": 0, 00:12:45.570 "rw_mbytes_per_sec": 0, 00:12:45.570 "r_mbytes_per_sec": 0, 00:12:45.570 "w_mbytes_per_sec": 0 00:12:45.570 }, 00:12:45.570 "claimed": true, 00:12:45.570 "claim_type": "exclusive_write", 00:12:45.570 "zoned": false, 00:12:45.570 "supported_io_types": { 
00:12:45.570 "read": true, 00:12:45.570 "write": true, 00:12:45.570 "unmap": true, 00:12:45.570 "flush": true, 00:12:45.570 "reset": true, 00:12:45.570 "nvme_admin": false, 00:12:45.570 "nvme_io": false, 00:12:45.570 "nvme_io_md": false, 00:12:45.570 "write_zeroes": true, 00:12:45.570 "zcopy": true, 00:12:45.570 "get_zone_info": false, 00:12:45.570 "zone_management": false, 00:12:45.570 "zone_append": false, 00:12:45.570 "compare": false, 00:12:45.570 "compare_and_write": false, 00:12:45.570 "abort": true, 00:12:45.570 "seek_hole": false, 00:12:45.570 "seek_data": false, 00:12:45.570 "copy": true, 00:12:45.570 "nvme_iov_md": false 00:12:45.570 }, 00:12:45.570 "memory_domains": [ 00:12:45.570 { 00:12:45.570 "dma_device_id": "system", 00:12:45.570 "dma_device_type": 1 00:12:45.570 }, 00:12:45.570 { 00:12:45.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.570 "dma_device_type": 2 00:12:45.570 } 00:12:45.570 ], 00:12:45.570 "driver_specific": {} 00:12:45.570 } 00:12:45.570 ] 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.570 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.570 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.570 "name": "Existed_Raid", 00:12:45.570 "uuid": "6b85e3d6-84e8-48da-b08d-59d71f664610", 00:12:45.570 "strip_size_kb": 64, 00:12:45.570 "state": "configuring", 00:12:45.570 "raid_level": "concat", 00:12:45.570 "superblock": true, 00:12:45.570 "num_base_bdevs": 4, 00:12:45.570 "num_base_bdevs_discovered": 3, 00:12:45.570 "num_base_bdevs_operational": 4, 00:12:45.570 "base_bdevs_list": [ 00:12:45.570 { 00:12:45.570 "name": "BaseBdev1", 00:12:45.570 "uuid": "ff855a9e-bf98-4fad-98b9-1ed33b025128", 00:12:45.570 "is_configured": true, 00:12:45.570 "data_offset": 2048, 00:12:45.570 "data_size": 63488 00:12:45.570 }, 00:12:45.570 { 00:12:45.570 "name": "BaseBdev2", 00:12:45.570 
"uuid": "2a4e0538-a545-4422-9de5-770c6ce25781", 00:12:45.570 "is_configured": true, 00:12:45.570 "data_offset": 2048, 00:12:45.570 "data_size": 63488 00:12:45.570 }, 00:12:45.570 { 00:12:45.570 "name": "BaseBdev3", 00:12:45.570 "uuid": "a2624fac-25b8-4c3f-83e8-f78dc8961423", 00:12:45.570 "is_configured": true, 00:12:45.570 "data_offset": 2048, 00:12:45.570 "data_size": 63488 00:12:45.570 }, 00:12:45.570 { 00:12:45.570 "name": "BaseBdev4", 00:12:45.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.570 "is_configured": false, 00:12:45.570 "data_offset": 0, 00:12:45.571 "data_size": 0 00:12:45.571 } 00:12:45.571 ] 00:12:45.571 }' 00:12:45.571 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.571 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 [2024-10-11 09:46:30.409567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:45.829 [2024-10-11 09:46:30.409974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:45.829 [2024-10-11 09:46:30.409995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:45.829 [2024-10-11 09:46:30.410320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:45.829 [2024-10-11 09:46:30.410497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:45.829 [2024-10-11 09:46:30.410511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:12:45.829 BaseBdev4 00:12:45.829 [2024-10-11 09:46:30.410664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.829 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.829 [ 00:12:45.829 { 00:12:45.829 "name": "BaseBdev4", 00:12:45.829 "aliases": [ 00:12:45.829 "22cd28d5-c3b5-40d6-b5c4-a640783c85fe" 00:12:45.829 ], 00:12:45.829 "product_name": "Malloc disk", 00:12:45.829 "block_size": 512, 
00:12:45.829 "num_blocks": 65536, 00:12:45.829 "uuid": "22cd28d5-c3b5-40d6-b5c4-a640783c85fe", 00:12:45.829 "assigned_rate_limits": { 00:12:45.829 "rw_ios_per_sec": 0, 00:12:45.829 "rw_mbytes_per_sec": 0, 00:12:45.829 "r_mbytes_per_sec": 0, 00:12:45.829 "w_mbytes_per_sec": 0 00:12:45.829 }, 00:12:45.830 "claimed": true, 00:12:45.830 "claim_type": "exclusive_write", 00:12:45.830 "zoned": false, 00:12:45.830 "supported_io_types": { 00:12:45.830 "read": true, 00:12:45.830 "write": true, 00:12:45.830 "unmap": true, 00:12:45.830 "flush": true, 00:12:45.830 "reset": true, 00:12:45.830 "nvme_admin": false, 00:12:45.830 "nvme_io": false, 00:12:45.830 "nvme_io_md": false, 00:12:45.830 "write_zeroes": true, 00:12:45.830 "zcopy": true, 00:12:45.830 "get_zone_info": false, 00:12:45.830 "zone_management": false, 00:12:45.830 "zone_append": false, 00:12:45.830 "compare": false, 00:12:45.830 "compare_and_write": false, 00:12:45.830 "abort": true, 00:12:45.830 "seek_hole": false, 00:12:45.830 "seek_data": false, 00:12:45.830 "copy": true, 00:12:45.830 "nvme_iov_md": false 00:12:45.830 }, 00:12:45.830 "memory_domains": [ 00:12:45.830 { 00:12:45.830 "dma_device_id": "system", 00:12:45.830 "dma_device_type": 1 00:12:45.830 }, 00:12:45.830 { 00:12:45.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.830 "dma_device_type": 2 00:12:45.830 } 00:12:45.830 ], 00:12:45.830 "driver_specific": {} 00:12:45.830 } 00:12:45.830 ] 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.830 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.089 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.089 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.089 "name": "Existed_Raid", 00:12:46.089 "uuid": "6b85e3d6-84e8-48da-b08d-59d71f664610", 00:12:46.089 "strip_size_kb": 64, 00:12:46.089 "state": "online", 00:12:46.089 "raid_level": "concat", 00:12:46.089 "superblock": true, 00:12:46.089 "num_base_bdevs": 
4, 00:12:46.089 "num_base_bdevs_discovered": 4, 00:12:46.089 "num_base_bdevs_operational": 4, 00:12:46.089 "base_bdevs_list": [ 00:12:46.089 { 00:12:46.089 "name": "BaseBdev1", 00:12:46.089 "uuid": "ff855a9e-bf98-4fad-98b9-1ed33b025128", 00:12:46.089 "is_configured": true, 00:12:46.089 "data_offset": 2048, 00:12:46.089 "data_size": 63488 00:12:46.089 }, 00:12:46.089 { 00:12:46.089 "name": "BaseBdev2", 00:12:46.089 "uuid": "2a4e0538-a545-4422-9de5-770c6ce25781", 00:12:46.089 "is_configured": true, 00:12:46.089 "data_offset": 2048, 00:12:46.089 "data_size": 63488 00:12:46.089 }, 00:12:46.089 { 00:12:46.089 "name": "BaseBdev3", 00:12:46.089 "uuid": "a2624fac-25b8-4c3f-83e8-f78dc8961423", 00:12:46.089 "is_configured": true, 00:12:46.089 "data_offset": 2048, 00:12:46.089 "data_size": 63488 00:12:46.089 }, 00:12:46.089 { 00:12:46.089 "name": "BaseBdev4", 00:12:46.089 "uuid": "22cd28d5-c3b5-40d6-b5c4-a640783c85fe", 00:12:46.089 "is_configured": true, 00:12:46.089 "data_offset": 2048, 00:12:46.089 "data_size": 63488 00:12:46.089 } 00:12:46.089 ] 00:12:46.089 }' 00:12:46.089 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.089 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.347 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:46.347 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:46.347 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:46.347 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:46.348 
09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.348 [2024-10-11 09:46:30.901253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:46.348 "name": "Existed_Raid", 00:12:46.348 "aliases": [ 00:12:46.348 "6b85e3d6-84e8-48da-b08d-59d71f664610" 00:12:46.348 ], 00:12:46.348 "product_name": "Raid Volume", 00:12:46.348 "block_size": 512, 00:12:46.348 "num_blocks": 253952, 00:12:46.348 "uuid": "6b85e3d6-84e8-48da-b08d-59d71f664610", 00:12:46.348 "assigned_rate_limits": { 00:12:46.348 "rw_ios_per_sec": 0, 00:12:46.348 "rw_mbytes_per_sec": 0, 00:12:46.348 "r_mbytes_per_sec": 0, 00:12:46.348 "w_mbytes_per_sec": 0 00:12:46.348 }, 00:12:46.348 "claimed": false, 00:12:46.348 "zoned": false, 00:12:46.348 "supported_io_types": { 00:12:46.348 "read": true, 00:12:46.348 "write": true, 00:12:46.348 "unmap": true, 00:12:46.348 "flush": true, 00:12:46.348 "reset": true, 00:12:46.348 "nvme_admin": false, 00:12:46.348 "nvme_io": false, 00:12:46.348 "nvme_io_md": false, 00:12:46.348 "write_zeroes": true, 00:12:46.348 "zcopy": false, 00:12:46.348 "get_zone_info": false, 00:12:46.348 "zone_management": false, 00:12:46.348 "zone_append": false, 00:12:46.348 "compare": false, 00:12:46.348 "compare_and_write": false, 00:12:46.348 "abort": false, 00:12:46.348 "seek_hole": false, 00:12:46.348 "seek_data": false, 00:12:46.348 "copy": false, 00:12:46.348 
"nvme_iov_md": false 00:12:46.348 }, 00:12:46.348 "memory_domains": [ 00:12:46.348 { 00:12:46.348 "dma_device_id": "system", 00:12:46.348 "dma_device_type": 1 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.348 "dma_device_type": 2 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "dma_device_id": "system", 00:12:46.348 "dma_device_type": 1 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.348 "dma_device_type": 2 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "dma_device_id": "system", 00:12:46.348 "dma_device_type": 1 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.348 "dma_device_type": 2 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "dma_device_id": "system", 00:12:46.348 "dma_device_type": 1 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.348 "dma_device_type": 2 00:12:46.348 } 00:12:46.348 ], 00:12:46.348 "driver_specific": { 00:12:46.348 "raid": { 00:12:46.348 "uuid": "6b85e3d6-84e8-48da-b08d-59d71f664610", 00:12:46.348 "strip_size_kb": 64, 00:12:46.348 "state": "online", 00:12:46.348 "raid_level": "concat", 00:12:46.348 "superblock": true, 00:12:46.348 "num_base_bdevs": 4, 00:12:46.348 "num_base_bdevs_discovered": 4, 00:12:46.348 "num_base_bdevs_operational": 4, 00:12:46.348 "base_bdevs_list": [ 00:12:46.348 { 00:12:46.348 "name": "BaseBdev1", 00:12:46.348 "uuid": "ff855a9e-bf98-4fad-98b9-1ed33b025128", 00:12:46.348 "is_configured": true, 00:12:46.348 "data_offset": 2048, 00:12:46.348 "data_size": 63488 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "name": "BaseBdev2", 00:12:46.348 "uuid": "2a4e0538-a545-4422-9de5-770c6ce25781", 00:12:46.348 "is_configured": true, 00:12:46.348 "data_offset": 2048, 00:12:46.348 "data_size": 63488 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "name": "BaseBdev3", 00:12:46.348 "uuid": "a2624fac-25b8-4c3f-83e8-f78dc8961423", 00:12:46.348 "is_configured": true, 
00:12:46.348 "data_offset": 2048, 00:12:46.348 "data_size": 63488 00:12:46.348 }, 00:12:46.348 { 00:12:46.348 "name": "BaseBdev4", 00:12:46.348 "uuid": "22cd28d5-c3b5-40d6-b5c4-a640783c85fe", 00:12:46.348 "is_configured": true, 00:12:46.348 "data_offset": 2048, 00:12:46.348 "data_size": 63488 00:12:46.348 } 00:12:46.348 ] 00:12:46.348 } 00:12:46.348 } 00:12:46.348 }' 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:46.348 BaseBdev2 00:12:46.348 BaseBdev3 00:12:46.348 BaseBdev4' 00:12:46.348 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.607 09:46:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.607 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.608 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.608 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.608 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:46.608 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.608 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.608 [2024-10-11 09:46:31.204418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.608 [2024-10-11 09:46:31.204519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.608 [2024-10-11 09:46:31.204612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:46.865 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.865 "name": "Existed_Raid", 00:12:46.865 "uuid": "6b85e3d6-84e8-48da-b08d-59d71f664610", 00:12:46.865 "strip_size_kb": 64, 00:12:46.865 "state": "offline", 00:12:46.865 "raid_level": "concat", 00:12:46.865 "superblock": true, 00:12:46.865 "num_base_bdevs": 4, 00:12:46.865 "num_base_bdevs_discovered": 3, 00:12:46.865 "num_base_bdevs_operational": 3, 00:12:46.865 "base_bdevs_list": [ 00:12:46.865 { 00:12:46.865 "name": null, 00:12:46.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.865 "is_configured": false, 00:12:46.865 "data_offset": 0, 00:12:46.865 "data_size": 63488 00:12:46.865 }, 00:12:46.865 { 00:12:46.865 "name": "BaseBdev2", 00:12:46.866 "uuid": "2a4e0538-a545-4422-9de5-770c6ce25781", 00:12:46.866 "is_configured": true, 00:12:46.866 "data_offset": 2048, 00:12:46.866 "data_size": 63488 00:12:46.866 }, 00:12:46.866 { 00:12:46.866 "name": "BaseBdev3", 00:12:46.866 "uuid": "a2624fac-25b8-4c3f-83e8-f78dc8961423", 00:12:46.866 "is_configured": true, 00:12:46.866 "data_offset": 2048, 00:12:46.866 "data_size": 63488 00:12:46.866 }, 00:12:46.866 { 00:12:46.866 "name": "BaseBdev4", 00:12:46.866 "uuid": "22cd28d5-c3b5-40d6-b5c4-a640783c85fe", 00:12:46.866 "is_configured": true, 00:12:46.866 "data_offset": 2048, 00:12:46.866 "data_size": 63488 00:12:46.866 } 00:12:46.866 ] 00:12:46.866 }' 00:12:46.866 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.866 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.433 
09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.433 [2024-10-11 09:46:31.865919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.433 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:47.433 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.433 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.433 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:47.433 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.433 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.433 [2024-10-11 09:46:32.025680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:47.693 09:46:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.693 [2024-10-11 09:46:32.189217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:47.693 [2024-10-11 09:46:32.189354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:47.693 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:47.952 BaseBdev2
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.952 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:47.952 [
00:12:47.953 {
00:12:47.953 "name": "BaseBdev2",
00:12:47.953 "aliases": [
00:12:47.953 "6f7abb0c-1704-412a-95f5-98eb25354f4f"
00:12:47.953 ],
00:12:47.953 "product_name": "Malloc disk",
00:12:47.953 "block_size": 512,
00:12:47.953 "num_blocks": 65536,
00:12:47.953 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f",
00:12:47.953 "assigned_rate_limits": {
00:12:47.953 "rw_ios_per_sec": 0,
00:12:47.953 "rw_mbytes_per_sec": 0,
00:12:47.953 "r_mbytes_per_sec": 0,
00:12:47.953 "w_mbytes_per_sec": 0
00:12:47.953 },
00:12:47.953 "claimed": false,
00:12:47.953 "zoned": false,
00:12:47.953 "supported_io_types": {
00:12:47.953 "read": true,
00:12:47.953 "write": true,
00:12:47.953 "unmap": true,
00:12:47.953 "flush": true,
00:12:47.953 "reset": true,
00:12:47.953 "nvme_admin": false,
00:12:47.953 "nvme_io": false,
00:12:47.953 "nvme_io_md": false,
00:12:47.953 "write_zeroes": true,
00:12:47.953 "zcopy": true,
00:12:47.953 "get_zone_info": false,
00:12:47.953 "zone_management": false,
00:12:47.953 "zone_append": false,
00:12:47.953 "compare": false,
00:12:47.953 "compare_and_write": false,
00:12:47.953 "abort": true,
00:12:47.953 "seek_hole": false,
00:12:47.953 "seek_data": false,
00:12:47.953 "copy": true,
00:12:47.953 "nvme_iov_md": false
00:12:47.953 },
00:12:47.953 "memory_domains": [
00:12:47.953 {
00:12:47.953 "dma_device_id": "system",
00:12:47.953 "dma_device_type": 1
00:12:47.953 },
00:12:47.953 {
00:12:47.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:47.953 "dma_device_type": 2
00:12:47.953 }
00:12:47.953 ],
00:12:47.953 "driver_specific": {}
00:12:47.953 }
00:12:47.953 ]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:47.953 BaseBdev3
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:47.953 [
00:12:47.953 "name": "BaseBdev3",
00:12:47.953 "aliases": [
00:12:47.953 "0f08eb5d-f548-4b60-8dc3-3a10014b2846"
00:12:47.953 ],
00:12:47.953 "product_name": "Malloc disk",
00:12:47.953 "block_size": 512,
00:12:47.953 "num_blocks": 65536,
00:12:47.953 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846",
00:12:47.953 "assigned_rate_limits": {
00:12:47.953 "rw_ios_per_sec": 0,
00:12:47.953 "rw_mbytes_per_sec": 0,
00:12:47.953 "r_mbytes_per_sec": 0,
00:12:47.953 "w_mbytes_per_sec": 0
00:12:47.953 },
00:12:47.953 "claimed": false,
00:12:47.953 "zoned": false,
00:12:47.953 "supported_io_types": {
00:12:47.953 "read": true,
00:12:47.953 "write": true,
00:12:47.953 "unmap": true,
00:12:47.953 "flush": true,
00:12:47.953 "reset": true,
00:12:47.953 "nvme_admin": false,
00:12:47.953 "nvme_io": false,
00:12:47.953 "nvme_io_md": false,
00:12:47.953 "write_zeroes": true,
00:12:47.953 "zcopy": true,
00:12:47.953 "get_zone_info": false,
00:12:47.953 "zone_management": false,
00:12:47.953 "zone_append": false,
00:12:47.953 "compare": false,
00:12:47.953 "compare_and_write": false,
00:12:47.953 "abort": true,
00:12:47.953 "seek_hole": false,
00:12:47.953 "seek_data": false,
00:12:47.953 "copy": true,
00:12:47.953 "nvme_iov_md": false
00:12:47.953 },
00:12:47.953 "memory_domains": [
00:12:47.953 {
00:12:47.953 "dma_device_id": "system",
00:12:47.953 "dma_device_type": 1
00:12:47.953 },
00:12:47.953 {
00:12:47.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:47.953 "dma_device_type": 2
00:12:47.953 }
00:12:47.953 ],
00:12:47.953 "driver_specific": {}
00:12:47.953 }
00:12:47.953 ]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:47.953 BaseBdev4
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.953 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.212 [
00:12:48.212 {
00:12:48.212 "name": "BaseBdev4",
00:12:48.212 "aliases": [
00:12:48.212 "251d9f05-f894-467e-a22f-78e32785a603"
00:12:48.212 ],
00:12:48.212 "product_name": "Malloc disk",
00:12:48.212 "block_size": 512,
00:12:48.212 "num_blocks": 65536,
00:12:48.212 "uuid": "251d9f05-f894-467e-a22f-78e32785a603",
00:12:48.212 "assigned_rate_limits": {
00:12:48.212 "rw_ios_per_sec": 0,
00:12:48.212 "rw_mbytes_per_sec": 0,
00:12:48.212 "r_mbytes_per_sec": 0,
00:12:48.212 "w_mbytes_per_sec": 0
00:12:48.212 },
00:12:48.212 "claimed": false,
00:12:48.212 "zoned": false,
00:12:48.212 "supported_io_types": {
00:12:48.212 "read": true,
00:12:48.212 "write": true,
00:12:48.212 "unmap": true,
00:12:48.212 "flush": true,
00:12:48.212 "reset": true,
00:12:48.212 "nvme_admin": false,
00:12:48.212 "nvme_io": false,
00:12:48.212 "nvme_io_md": false,
00:12:48.212 "write_zeroes": true,
00:12:48.212 "zcopy": true,
00:12:48.212 "get_zone_info": false,
00:12:48.212 "zone_management": false,
00:12:48.212 "zone_append": false,
00:12:48.212 "compare": false,
00:12:48.212 "compare_and_write": false,
00:12:48.212 "abort": true,
00:12:48.212 "seek_hole": false,
00:12:48.212 "seek_data": false,
00:12:48.212 "copy": true,
00:12:48.212 "nvme_iov_md": false
00:12:48.212 },
00:12:48.212 "memory_domains": [
00:12:48.212 {
00:12:48.212 "dma_device_id": "system",
00:12:48.212 "dma_device_type": 1
00:12:48.212 },
00:12:48.212 {
00:12:48.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:48.212 "dma_device_type": 2
00:12:48.212 }
00:12:48.212 ],
00:12:48.212 "driver_specific": {}
00:12:48.212 }
00:12:48.212 ]
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.212 [2024-10-11 09:46:32.625664] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-10-11 09:46:32.625801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-10-11 09:46:32.625861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-10-11 09:46:32.627954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-10-11 09:46:32.628059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.212 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:48.213 "name": "Existed_Raid",
00:12:48.213 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de",
00:12:48.213 "strip_size_kb": 64,
00:12:48.213 "state": "configuring",
00:12:48.213 "raid_level": "concat",
00:12:48.213 "superblock": true,
00:12:48.213 "num_base_bdevs": 4,
00:12:48.213 "num_base_bdevs_discovered": 3,
00:12:48.213 "num_base_bdevs_operational": 4,
00:12:48.213 "base_bdevs_list": [
00:12:48.213 {
00:12:48.213 "name": "BaseBdev1",
00:12:48.213 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:48.213 "is_configured": false,
00:12:48.213 "data_offset": 0,
00:12:48.213 "data_size": 0
00:12:48.213 },
00:12:48.213 {
00:12:48.213 "name": "BaseBdev2",
00:12:48.213 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f",
00:12:48.213 "is_configured": true,
00:12:48.213 "data_offset": 2048,
00:12:48.213 "data_size": 63488
00:12:48.213 },
00:12:48.213 {
00:12:48.213 "name": "BaseBdev3",
00:12:48.213 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846",
00:12:48.213 "is_configured": true,
00:12:48.213 "data_offset": 2048,
00:12:48.213 "data_size": 63488
00:12:48.213 },
00:12:48.213 {
00:12:48.213 "name": "BaseBdev4",
00:12:48.213 "uuid": "251d9f05-f894-467e-a22f-78e32785a603",
00:12:48.213 "is_configured": true,
00:12:48.213 "data_offset": 2048,
00:12:48.213 "data_size": 63488
00:12:48.213 }
00:12:48.213 ]
00:12:48.213 }'
00:12:48.213 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:48.213 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.471 [2024-10-11 09:46:33.084912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.471 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.729 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.729 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:48.729 "name": "Existed_Raid",
00:12:48.729 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de",
00:12:48.729 "strip_size_kb": 64,
00:12:48.729 "state": "configuring",
00:12:48.729 "raid_level": "concat",
00:12:48.729 "superblock": true,
00:12:48.729 "num_base_bdevs": 4,
00:12:48.729 "num_base_bdevs_discovered": 2,
00:12:48.729 "num_base_bdevs_operational": 4,
00:12:48.729 "base_bdevs_list": [
00:12:48.729 {
00:12:48.729 "name": "BaseBdev1",
00:12:48.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:48.729 "is_configured": false,
00:12:48.729 "data_offset": 0,
00:12:48.729 "data_size": 0
00:12:48.729 },
00:12:48.729 {
00:12:48.729 "name": null,
00:12:48.729 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f",
00:12:48.729 "is_configured": false,
00:12:48.729 "data_offset": 0,
00:12:48.729 "data_size": 63488
00:12:48.729 },
00:12:48.729 {
00:12:48.729 "name": "BaseBdev3",
00:12:48.729 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846",
00:12:48.729 "is_configured": true,
00:12:48.729 "data_offset": 2048,
00:12:48.729 "data_size": 63488
00:12:48.729 },
00:12:48.729 {
00:12:48.729 "name": "BaseBdev4",
00:12:48.729 "uuid": "251d9f05-f894-467e-a22f-78e32785a603",
00:12:48.729 "is_configured": true,
00:12:48.729 "data_offset": 2048,
00:12:48.729 "data_size": 63488
00:12:48.729 }
00:12:48.729 ]
00:12:48.729 }'
00:12:48.729 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:48.729 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.988 [2024-10-11 09:46:33.604003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:48.988 BaseBdev1
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.988 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.247 [
00:12:49.247 {
00:12:49.247 "name": "BaseBdev1",
00:12:49.247 "aliases": [
00:12:49.247 "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5"
00:12:49.247 ],
00:12:49.247 "product_name": "Malloc disk",
00:12:49.247 "block_size": 512,
00:12:49.247 "num_blocks": 65536,
00:12:49.247 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5",
00:12:49.247 "assigned_rate_limits": {
00:12:49.247 "rw_ios_per_sec": 0,
00:12:49.247 "rw_mbytes_per_sec": 0,
00:12:49.247 "r_mbytes_per_sec": 0,
00:12:49.247 "w_mbytes_per_sec": 0
00:12:49.247 },
00:12:49.247 "claimed": true,
00:12:49.247 "claim_type": "exclusive_write",
00:12:49.247 "zoned": false,
00:12:49.247 "supported_io_types": {
00:12:49.247 "read": true,
00:12:49.247 "write": true,
00:12:49.247 "unmap": true,
00:12:49.247 "flush": true,
00:12:49.247 "reset": true,
00:12:49.247 "nvme_admin": false,
00:12:49.247 "nvme_io": false,
00:12:49.247 "nvme_io_md": false,
00:12:49.247 "write_zeroes": true,
00:12:49.247 "zcopy": true,
00:12:49.247 "get_zone_info": false,
00:12:49.247 "zone_management": false,
00:12:49.247 "zone_append": false,
00:12:49.247 "compare": false,
00:12:49.247 "compare_and_write": false,
00:12:49.247 "abort": true,
00:12:49.247 "seek_hole": false,
00:12:49.247 "seek_data": false,
00:12:49.247 "copy": true,
00:12:49.247 "nvme_iov_md": false
00:12:49.247 },
00:12:49.247 "memory_domains": [
00:12:49.247 {
00:12:49.247 "dma_device_id": "system",
00:12:49.247 "dma_device_type": 1
00:12:49.247 },
00:12:49.247 {
00:12:49.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:49.247 "dma_device_type": 2
00:12:49.247 }
00:12:49.247 ],
00:12:49.247 "driver_specific": {}
00:12:49.247 }
00:12:49.247 ]
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:49.247 "name": "Existed_Raid",
00:12:49.247 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de",
00:12:49.247 "strip_size_kb": 64,
00:12:49.247 "state": "configuring",
00:12:49.247 "raid_level": "concat",
00:12:49.247 "superblock": true,
00:12:49.247 "num_base_bdevs": 4,
00:12:49.247 "num_base_bdevs_discovered": 3,
00:12:49.247 "num_base_bdevs_operational": 4,
00:12:49.247 "base_bdevs_list": [
00:12:49.247 {
00:12:49.247 "name": "BaseBdev1",
00:12:49.247 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5",
00:12:49.247 "is_configured": true,
00:12:49.247 "data_offset": 2048,
00:12:49.247 "data_size": 63488
00:12:49.247 },
00:12:49.247 {
00:12:49.247 "name": null,
00:12:49.247 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f",
00:12:49.247 "is_configured": false,
00:12:49.247 "data_offset": 0,
00:12:49.247 "data_size": 63488
00:12:49.247 },
00:12:49.247 {
00:12:49.247 "name": "BaseBdev3",
00:12:49.247 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846",
00:12:49.247 "is_configured": true,
00:12:49.247 "data_offset": 2048,
00:12:49.247 "data_size": 63488
00:12:49.247 },
00:12:49.247 {
00:12:49.247 "name": "BaseBdev4",
00:12:49.247 "uuid": "251d9f05-f894-467e-a22f-78e32785a603",
00:12:49.247 "is_configured": true,
00:12:49.247 "data_offset": 2048,
00:12:49.247 "data_size": 63488
00:12:49.247 }
00:12:49.247 ]
00:12:49.247 }'
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:49.247 09:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.505 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:49.505 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.505 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.505 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.764 [2024-10-11 09:46:34.187163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:49.764 "name": "Existed_Raid",
00:12:49.764 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de",
00:12:49.764 "strip_size_kb": 64,
00:12:49.764 "state": "configuring",
00:12:49.764 "raid_level": "concat",
00:12:49.764 "superblock": true,
00:12:49.764 "num_base_bdevs": 4,
00:12:49.764 "num_base_bdevs_discovered": 2,
00:12:49.764 "num_base_bdevs_operational": 4,
00:12:49.764 "base_bdevs_list": [
00:12:49.764 {
00:12:49.764 "name": "BaseBdev1",
00:12:49.764 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5",
00:12:49.764 "is_configured": true,
00:12:49.764 "data_offset": 2048,
00:12:49.764 "data_size": 63488
00:12:49.764 },
00:12:49.764 {
00:12:49.764 "name": null,
00:12:49.764 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f",
00:12:49.764 "is_configured": false,
00:12:49.764 "data_offset": 0,
00:12:49.764 "data_size": 63488
00:12:49.764 },
00:12:49.764 {
00:12:49.764 "name": null,
00:12:49.764 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846",
00:12:49.764 "is_configured": false,
00:12:49.764 "data_offset": 0,
00:12:49.764 "data_size": 63488
00:12:49.764 },
00:12:49.764 {
00:12:49.764 "name": "BaseBdev4",
00:12:49.764 "uuid": "251d9f05-f894-467e-a22f-78e32785a603",
00:12:49.764 "is_configured": true,
00:12:49.764 "data_offset": 2048,
00:12:49.764 "data_size": 63488
00:12:49.764 }
00:12:49.764 ]
00:12:49.764 }'
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:49.764 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.022 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:50.022 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:50.022 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:50.022 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.280 [2024-10-11 09:46:34.674439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:50.280 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:50.280 "name": "Existed_Raid",
00:12:50.280 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de",
00:12:50.280 "strip_size_kb": 64,
00:12:50.280 "state": "configuring",
00:12:50.280 "raid_level": "concat",
00:12:50.280 "superblock": true,
00:12:50.280 "num_base_bdevs": 4,
00:12:50.280 "num_base_bdevs_discovered": 3,
00:12:50.280 "num_base_bdevs_operational": 4,
00:12:50.280 "base_bdevs_list": [
00:12:50.280 {
00:12:50.280 "name": "BaseBdev1",
00:12:50.280 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5",
00:12:50.281 "is_configured": true,
00:12:50.281 "data_offset": 2048,
00:12:50.281 "data_size": 63488
00:12:50.281 },
00:12:50.281 {
00:12:50.281 "name": null,
00:12:50.281 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f",
00:12:50.281 "is_configured": false,
00:12:50.281 "data_offset": 0,
00:12:50.281 "data_size": 63488
00:12:50.281 },
00:12:50.281 {
00:12:50.281 "name": "BaseBdev3",
00:12:50.281 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846",
00:12:50.281 "is_configured": true,
00:12:50.281 "data_offset": 2048,
00:12:50.281 "data_size": 63488
00:12:50.281 },
00:12:50.281 {
00:12:50.281 "name": "BaseBdev4",
00:12:50.281 "uuid": "251d9f05-f894-467e-a22f-78e32785a603",
00:12:50.281 "is_configured": true,
00:12:50.281 "data_offset": 2048,
00:12:50.281 "data_size": 63488
00:12:50.281 }
00:12:50.281 ]
00:12:50.281 }'
00:12:50.281 09:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:50.281 09:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:50.539 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.539 [2024-10-11 09:46:35.169728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:50.799 "name": "Existed_Raid",
00:12:50.799 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de",
00:12:50.799 "strip_size_kb": 64,
00:12:50.799 "state": "configuring",
00:12:50.799 "raid_level": "concat",
00:12:50.799 "superblock": true,
00:12:50.799 "num_base_bdevs": 4,
00:12:50.799 "num_base_bdevs_discovered": 2,
00:12:50.799 "num_base_bdevs_operational": 4,
00:12:50.799 "base_bdevs_list": [
00:12:50.799 {
00:12:50.799 "name": null,
00:12:50.799 
"uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5", 00:12:50.799 "is_configured": false, 00:12:50.799 "data_offset": 0, 00:12:50.799 "data_size": 63488 00:12:50.799 }, 00:12:50.799 { 00:12:50.799 "name": null, 00:12:50.799 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f", 00:12:50.799 "is_configured": false, 00:12:50.799 "data_offset": 0, 00:12:50.799 "data_size": 63488 00:12:50.799 }, 00:12:50.799 { 00:12:50.799 "name": "BaseBdev3", 00:12:50.799 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846", 00:12:50.799 "is_configured": true, 00:12:50.799 "data_offset": 2048, 00:12:50.799 "data_size": 63488 00:12:50.799 }, 00:12:50.799 { 00:12:50.799 "name": "BaseBdev4", 00:12:50.799 "uuid": "251d9f05-f894-467e-a22f-78e32785a603", 00:12:50.799 "is_configured": true, 00:12:50.799 "data_offset": 2048, 00:12:50.799 "data_size": 63488 00:12:50.799 } 00:12:50.799 ] 00:12:50.799 }' 00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.799 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.367 [2024-10-11 09:46:35.794931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.367 "name": "Existed_Raid", 00:12:51.367 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de", 00:12:51.367 "strip_size_kb": 64, 00:12:51.367 "state": "configuring", 00:12:51.367 "raid_level": "concat", 00:12:51.367 "superblock": true, 00:12:51.367 "num_base_bdevs": 4, 00:12:51.367 "num_base_bdevs_discovered": 3, 00:12:51.367 "num_base_bdevs_operational": 4, 00:12:51.367 "base_bdevs_list": [ 00:12:51.367 { 00:12:51.367 "name": null, 00:12:51.367 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5", 00:12:51.367 "is_configured": false, 00:12:51.367 "data_offset": 0, 00:12:51.367 "data_size": 63488 00:12:51.367 }, 00:12:51.367 { 00:12:51.367 "name": "BaseBdev2", 00:12:51.367 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f", 00:12:51.367 "is_configured": true, 00:12:51.367 "data_offset": 2048, 00:12:51.367 "data_size": 63488 00:12:51.367 }, 00:12:51.367 { 00:12:51.367 "name": "BaseBdev3", 00:12:51.367 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846", 00:12:51.367 "is_configured": true, 00:12:51.367 "data_offset": 2048, 00:12:51.367 "data_size": 63488 00:12:51.367 }, 00:12:51.367 { 00:12:51.367 "name": "BaseBdev4", 00:12:51.367 "uuid": "251d9f05-f894-467e-a22f-78e32785a603", 00:12:51.367 "is_configured": true, 00:12:51.367 "data_offset": 2048, 00:12:51.367 "data_size": 63488 00:12:51.367 } 00:12:51.367 ] 00:12:51.367 }' 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.367 09:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.627 09:46:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.627 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ba2ed66-9af8-44a2-8781-1f4aaf6beea5 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.887 [2024-10-11 09:46:36.329435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:51.887 [2024-10-11 09:46:36.329691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:51.887 [2024-10-11 09:46:36.329704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:51.887 [2024-10-11 09:46:36.330032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:51.887 [2024-10-11 09:46:36.330194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:51.887 [2024-10-11 09:46:36.330209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:51.887 [2024-10-11 09:46:36.330377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.887 NewBaseBdev 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.887 09:46:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.887 [ 00:12:51.887 { 00:12:51.887 "name": "NewBaseBdev", 00:12:51.887 "aliases": [ 00:12:51.887 "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5" 00:12:51.887 ], 00:12:51.887 "product_name": "Malloc disk", 00:12:51.887 "block_size": 512, 00:12:51.887 "num_blocks": 65536, 00:12:51.887 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5", 00:12:51.887 "assigned_rate_limits": { 00:12:51.887 "rw_ios_per_sec": 0, 00:12:51.887 "rw_mbytes_per_sec": 0, 00:12:51.887 "r_mbytes_per_sec": 0, 00:12:51.887 "w_mbytes_per_sec": 0 00:12:51.887 }, 00:12:51.887 "claimed": true, 00:12:51.887 "claim_type": "exclusive_write", 00:12:51.887 "zoned": false, 00:12:51.887 "supported_io_types": { 00:12:51.887 "read": true, 00:12:51.887 "write": true, 00:12:51.887 "unmap": true, 00:12:51.887 "flush": true, 00:12:51.887 "reset": true, 00:12:51.887 "nvme_admin": false, 00:12:51.887 "nvme_io": false, 00:12:51.887 "nvme_io_md": false, 00:12:51.887 "write_zeroes": true, 00:12:51.887 "zcopy": true, 00:12:51.887 "get_zone_info": false, 00:12:51.887 "zone_management": false, 00:12:51.887 "zone_append": false, 00:12:51.887 "compare": false, 00:12:51.887 "compare_and_write": false, 00:12:51.887 "abort": true, 00:12:51.887 "seek_hole": false, 00:12:51.887 "seek_data": false, 00:12:51.887 "copy": true, 00:12:51.887 "nvme_iov_md": false 00:12:51.887 }, 00:12:51.887 "memory_domains": [ 00:12:51.887 { 00:12:51.887 "dma_device_id": "system", 00:12:51.887 "dma_device_type": 1 00:12:51.887 }, 00:12:51.887 { 00:12:51.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.887 "dma_device_type": 2 00:12:51.887 } 00:12:51.887 ], 00:12:51.887 "driver_specific": {} 00:12:51.887 } 00:12:51.887 ] 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:51.887 09:46:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.887 "name": "Existed_Raid", 00:12:51.887 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de", 00:12:51.887 "strip_size_kb": 64, 00:12:51.887 
"state": "online", 00:12:51.887 "raid_level": "concat", 00:12:51.887 "superblock": true, 00:12:51.887 "num_base_bdevs": 4, 00:12:51.887 "num_base_bdevs_discovered": 4, 00:12:51.887 "num_base_bdevs_operational": 4, 00:12:51.887 "base_bdevs_list": [ 00:12:51.887 { 00:12:51.887 "name": "NewBaseBdev", 00:12:51.887 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5", 00:12:51.887 "is_configured": true, 00:12:51.887 "data_offset": 2048, 00:12:51.887 "data_size": 63488 00:12:51.887 }, 00:12:51.887 { 00:12:51.887 "name": "BaseBdev2", 00:12:51.887 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f", 00:12:51.887 "is_configured": true, 00:12:51.887 "data_offset": 2048, 00:12:51.887 "data_size": 63488 00:12:51.887 }, 00:12:51.887 { 00:12:51.887 "name": "BaseBdev3", 00:12:51.887 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846", 00:12:51.887 "is_configured": true, 00:12:51.887 "data_offset": 2048, 00:12:51.887 "data_size": 63488 00:12:51.887 }, 00:12:51.887 { 00:12:51.887 "name": "BaseBdev4", 00:12:51.887 "uuid": "251d9f05-f894-467e-a22f-78e32785a603", 00:12:51.887 "is_configured": true, 00:12:51.887 "data_offset": 2048, 00:12:51.887 "data_size": 63488 00:12:51.887 } 00:12:51.887 ] 00:12:51.887 }' 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.887 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:52.458 
09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 [2024-10-11 09:46:36.881001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.458 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:52.458 "name": "Existed_Raid", 00:12:52.458 "aliases": [ 00:12:52.458 "be96a78c-cc72-458c-bb9e-226ade3855de" 00:12:52.458 ], 00:12:52.458 "product_name": "Raid Volume", 00:12:52.458 "block_size": 512, 00:12:52.458 "num_blocks": 253952, 00:12:52.458 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de", 00:12:52.458 "assigned_rate_limits": { 00:12:52.458 "rw_ios_per_sec": 0, 00:12:52.458 "rw_mbytes_per_sec": 0, 00:12:52.458 "r_mbytes_per_sec": 0, 00:12:52.458 "w_mbytes_per_sec": 0 00:12:52.458 }, 00:12:52.458 "claimed": false, 00:12:52.458 "zoned": false, 00:12:52.458 "supported_io_types": { 00:12:52.458 "read": true, 00:12:52.458 "write": true, 00:12:52.458 "unmap": true, 00:12:52.458 "flush": true, 00:12:52.458 "reset": true, 00:12:52.458 "nvme_admin": false, 00:12:52.458 "nvme_io": false, 00:12:52.458 "nvme_io_md": false, 00:12:52.458 "write_zeroes": true, 00:12:52.458 "zcopy": false, 00:12:52.458 "get_zone_info": false, 00:12:52.458 "zone_management": false, 00:12:52.458 "zone_append": false, 00:12:52.458 "compare": false, 00:12:52.458 "compare_and_write": false, 00:12:52.458 "abort": 
false, 00:12:52.458 "seek_hole": false, 00:12:52.458 "seek_data": false, 00:12:52.458 "copy": false, 00:12:52.458 "nvme_iov_md": false 00:12:52.458 }, 00:12:52.458 "memory_domains": [ 00:12:52.458 { 00:12:52.458 "dma_device_id": "system", 00:12:52.458 "dma_device_type": 1 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.458 "dma_device_type": 2 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "system", 00:12:52.458 "dma_device_type": 1 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.458 "dma_device_type": 2 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "system", 00:12:52.458 "dma_device_type": 1 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.458 "dma_device_type": 2 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "system", 00:12:52.458 "dma_device_type": 1 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.458 "dma_device_type": 2 00:12:52.458 } 00:12:52.458 ], 00:12:52.458 "driver_specific": { 00:12:52.458 "raid": { 00:12:52.458 "uuid": "be96a78c-cc72-458c-bb9e-226ade3855de", 00:12:52.458 "strip_size_kb": 64, 00:12:52.458 "state": "online", 00:12:52.458 "raid_level": "concat", 00:12:52.458 "superblock": true, 00:12:52.458 "num_base_bdevs": 4, 00:12:52.458 "num_base_bdevs_discovered": 4, 00:12:52.458 "num_base_bdevs_operational": 4, 00:12:52.458 "base_bdevs_list": [ 00:12:52.458 { 00:12:52.458 "name": "NewBaseBdev", 00:12:52.458 "uuid": "4ba2ed66-9af8-44a2-8781-1f4aaf6beea5", 00:12:52.458 "is_configured": true, 00:12:52.458 "data_offset": 2048, 00:12:52.458 "data_size": 63488 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "name": "BaseBdev2", 00:12:52.458 "uuid": "6f7abb0c-1704-412a-95f5-98eb25354f4f", 00:12:52.458 "is_configured": true, 00:12:52.458 "data_offset": 2048, 00:12:52.458 "data_size": 63488 00:12:52.458 }, 00:12:52.459 { 00:12:52.459 
"name": "BaseBdev3", 00:12:52.459 "uuid": "0f08eb5d-f548-4b60-8dc3-3a10014b2846", 00:12:52.459 "is_configured": true, 00:12:52.459 "data_offset": 2048, 00:12:52.459 "data_size": 63488 00:12:52.459 }, 00:12:52.459 { 00:12:52.459 "name": "BaseBdev4", 00:12:52.459 "uuid": "251d9f05-f894-467e-a22f-78e32785a603", 00:12:52.459 "is_configured": true, 00:12:52.459 "data_offset": 2048, 00:12:52.459 "data_size": 63488 00:12:52.459 } 00:12:52.459 ] 00:12:52.459 } 00:12:52.459 } 00:12:52.459 }' 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:52.459 BaseBdev2 00:12:52.459 BaseBdev3 00:12:52.459 BaseBdev4' 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.459 09:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.459 09:46:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.459 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.719 [2024-10-11 09:46:37.164065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.719 [2024-10-11 09:46:37.164144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.719 [2024-10-11 09:46:37.164253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.719 [2024-10-11 09:46:37.164345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.719 [2024-10-11 09:46:37.164357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72451 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72451 ']' 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72451 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72451 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72451' 00:12:52.719 killing process with pid 72451 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72451 00:12:52.719 [2024-10-11 09:46:37.216000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.719 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72451 00:12:52.979 [2024-10-11 09:46:37.608212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.360 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:54.360 00:12:54.360 real 0m11.964s 00:12:54.360 user 0m18.846s 00:12:54.360 sys 0m2.322s 00:12:54.360 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.360 09:46:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.360 ************************************ 00:12:54.360 END TEST raid_state_function_test_sb 00:12:54.360 ************************************ 00:12:54.360 09:46:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:54.360 09:46:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:54.360 09:46:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.360 09:46:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.360 ************************************ 00:12:54.360 START TEST raid_superblock_test 00:12:54.360 ************************************ 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73119 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73119 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73119 ']' 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.360 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.360 [2024-10-11 09:46:38.940013] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:54.360 [2024-10-11 09:46:38.940249] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73119 ] 00:12:54.620 [2024-10-11 09:46:39.094298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.620 [2024-10-11 09:46:39.222490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.879 [2024-10-11 09:46:39.454203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.879 [2024-10-11 09:46:39.454266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:55.449 
09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 malloc1 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 [2024-10-11 09:46:39.836465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:55.449 [2024-10-11 09:46:39.836643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.449 [2024-10-11 09:46:39.836724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:55.449 [2024-10-11 09:46:39.836788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.449 [2024-10-11 09:46:39.839003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.449 [2024-10-11 09:46:39.839082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.449 pt1 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 malloc2 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 [2024-10-11 09:46:39.900569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.449 [2024-10-11 09:46:39.900689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.449 [2024-10-11 09:46:39.900767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:55.449 [2024-10-11 09:46:39.900813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.449 [2024-10-11 09:46:39.903166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.449 [2024-10-11 09:46:39.903251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.449 
pt2 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 malloc3 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 [2024-10-11 09:46:39.981666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:55.449 [2024-10-11 09:46:39.981778] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.449 [2024-10-11 09:46:39.981823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:55.449 [2024-10-11 09:46:39.981860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.449 [2024-10-11 09:46:39.984264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.449 [2024-10-11 09:46:39.984352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:55.449 pt3 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 malloc4 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 [2024-10-11 09:46:40.044353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:55.449 [2024-10-11 09:46:40.044463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.449 [2024-10-11 09:46:40.044523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:55.449 [2024-10-11 09:46:40.044565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.449 [2024-10-11 09:46:40.046912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.449 [2024-10-11 09:46:40.046994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:55.449 pt4 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.449 [2024-10-11 09:46:40.056392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.449 [2024-10-11 
09:46:40.058254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.449 [2024-10-11 09:46:40.058311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:55.449 [2024-10-11 09:46:40.058371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:55.449 [2024-10-11 09:46:40.058557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:55.449 [2024-10-11 09:46:40.058571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:55.449 [2024-10-11 09:46:40.058894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:55.449 [2024-10-11 09:46:40.059135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:55.449 [2024-10-11 09:46:40.059196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:55.449 [2024-10-11 09:46:40.059420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.449 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.450 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.709 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.709 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.709 "name": "raid_bdev1", 00:12:55.709 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:55.709 "strip_size_kb": 64, 00:12:55.710 "state": "online", 00:12:55.710 "raid_level": "concat", 00:12:55.710 "superblock": true, 00:12:55.710 "num_base_bdevs": 4, 00:12:55.710 "num_base_bdevs_discovered": 4, 00:12:55.710 "num_base_bdevs_operational": 4, 00:12:55.710 "base_bdevs_list": [ 00:12:55.710 { 00:12:55.710 "name": "pt1", 00:12:55.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.710 "is_configured": true, 00:12:55.710 "data_offset": 2048, 00:12:55.710 "data_size": 63488 00:12:55.710 }, 00:12:55.710 { 00:12:55.710 "name": "pt2", 00:12:55.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.710 "is_configured": true, 00:12:55.710 "data_offset": 2048, 00:12:55.710 "data_size": 63488 00:12:55.710 }, 00:12:55.710 { 00:12:55.710 "name": "pt3", 00:12:55.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.710 "is_configured": true, 00:12:55.710 "data_offset": 2048, 00:12:55.710 
"data_size": 63488 00:12:55.710 }, 00:12:55.710 { 00:12:55.710 "name": "pt4", 00:12:55.710 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.710 "is_configured": true, 00:12:55.710 "data_offset": 2048, 00:12:55.710 "data_size": 63488 00:12:55.710 } 00:12:55.710 ] 00:12:55.710 }' 00:12:55.710 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.710 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.969 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:55.969 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:55.969 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.970 [2024-10-11 09:46:40.536071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.970 "name": "raid_bdev1", 00:12:55.970 "aliases": [ 00:12:55.970 "be0510a0-1f35-4abc-8667-2a7a4dd77fc0" 
00:12:55.970 ], 00:12:55.970 "product_name": "Raid Volume", 00:12:55.970 "block_size": 512, 00:12:55.970 "num_blocks": 253952, 00:12:55.970 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:55.970 "assigned_rate_limits": { 00:12:55.970 "rw_ios_per_sec": 0, 00:12:55.970 "rw_mbytes_per_sec": 0, 00:12:55.970 "r_mbytes_per_sec": 0, 00:12:55.970 "w_mbytes_per_sec": 0 00:12:55.970 }, 00:12:55.970 "claimed": false, 00:12:55.970 "zoned": false, 00:12:55.970 "supported_io_types": { 00:12:55.970 "read": true, 00:12:55.970 "write": true, 00:12:55.970 "unmap": true, 00:12:55.970 "flush": true, 00:12:55.970 "reset": true, 00:12:55.970 "nvme_admin": false, 00:12:55.970 "nvme_io": false, 00:12:55.970 "nvme_io_md": false, 00:12:55.970 "write_zeroes": true, 00:12:55.970 "zcopy": false, 00:12:55.970 "get_zone_info": false, 00:12:55.970 "zone_management": false, 00:12:55.970 "zone_append": false, 00:12:55.970 "compare": false, 00:12:55.970 "compare_and_write": false, 00:12:55.970 "abort": false, 00:12:55.970 "seek_hole": false, 00:12:55.970 "seek_data": false, 00:12:55.970 "copy": false, 00:12:55.970 "nvme_iov_md": false 00:12:55.970 }, 00:12:55.970 "memory_domains": [ 00:12:55.970 { 00:12:55.970 "dma_device_id": "system", 00:12:55.970 "dma_device_type": 1 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.970 "dma_device_type": 2 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "dma_device_id": "system", 00:12:55.970 "dma_device_type": 1 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.970 "dma_device_type": 2 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "dma_device_id": "system", 00:12:55.970 "dma_device_type": 1 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.970 "dma_device_type": 2 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "dma_device_id": "system", 00:12:55.970 "dma_device_type": 1 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:55.970 "dma_device_type": 2 00:12:55.970 } 00:12:55.970 ], 00:12:55.970 "driver_specific": { 00:12:55.970 "raid": { 00:12:55.970 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:55.970 "strip_size_kb": 64, 00:12:55.970 "state": "online", 00:12:55.970 "raid_level": "concat", 00:12:55.970 "superblock": true, 00:12:55.970 "num_base_bdevs": 4, 00:12:55.970 "num_base_bdevs_discovered": 4, 00:12:55.970 "num_base_bdevs_operational": 4, 00:12:55.970 "base_bdevs_list": [ 00:12:55.970 { 00:12:55.970 "name": "pt1", 00:12:55.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.970 "is_configured": true, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "name": "pt2", 00:12:55.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.970 "is_configured": true, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "name": "pt3", 00:12:55.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.970 "is_configured": true, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "name": "pt4", 00:12:55.970 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.970 "is_configured": true, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 } 00:12:55.970 ] 00:12:55.970 } 00:12:55.970 } 00:12:55.970 }' 00:12:55.970 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:56.230 pt2 00:12:56.230 pt3 00:12:56.230 pt4' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.230 09:46:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.230 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 [2024-10-11 09:46:40.887307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=be0510a0-1f35-4abc-8667-2a7a4dd77fc0 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z be0510a0-1f35-4abc-8667-2a7a4dd77fc0 ']' 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 [2024-10-11 09:46:40.930934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.490 [2024-10-11 09:46:40.931004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.490 [2024-10-11 09:46:40.931125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.490 [2024-10-11 09:46:40.931238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.490 [2024-10-11 09:46:40.931304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.490 09:46:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 [2024-10-11 09:46:41.062762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:56.490 [2024-10-11 09:46:41.064773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:56.490 [2024-10-11 09:46:41.064822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:56.490 [2024-10-11 09:46:41.064860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:56.490 [2024-10-11 09:46:41.064925] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:56.490 [2024-10-11 09:46:41.064977] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:56.490 [2024-10-11 09:46:41.064995] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:56.490 [2024-10-11 09:46:41.065014] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:56.490 [2024-10-11 09:46:41.065027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.490 [2024-10-11 09:46:41.065039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:56.490 request: 00:12:56.490 { 00:12:56.490 "name": "raid_bdev1", 00:12:56.490 "raid_level": "concat", 00:12:56.490 "base_bdevs": [ 00:12:56.490 "malloc1", 00:12:56.490 "malloc2", 00:12:56.490 "malloc3", 00:12:56.490 "malloc4" 00:12:56.490 ], 00:12:56.490 "strip_size_kb": 64, 00:12:56.490 "superblock": false, 00:12:56.490 "method": "bdev_raid_create", 00:12:56.490 "req_id": 1 00:12:56.490 } 00:12:56.490 Got JSON-RPC error response 00:12:56.490 response: 00:12:56.490 { 00:12:56.490 "code": -17, 00:12:56.490 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:56.490 } 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.490 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.490 [2024-10-11 09:46:41.118617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.490 [2024-10-11 09:46:41.118727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.490 [2024-10-11 09:46:41.118791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:56.490 [2024-10-11 09:46:41.118831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.750 [2024-10-11 09:46:41.121218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.750 [2024-10-11 09:46:41.121310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.750 [2024-10-11 09:46:41.121465] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:56.750 [2024-10-11 09:46:41.121596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.750 pt1 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.750 "name": "raid_bdev1", 00:12:56.750 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:56.750 "strip_size_kb": 64, 00:12:56.750 "state": "configuring", 00:12:56.750 "raid_level": "concat", 00:12:56.750 "superblock": true, 00:12:56.750 "num_base_bdevs": 4, 00:12:56.750 "num_base_bdevs_discovered": 1, 00:12:56.750 "num_base_bdevs_operational": 4, 00:12:56.750 "base_bdevs_list": [ 00:12:56.750 { 00:12:56.750 "name": "pt1", 00:12:56.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.750 "is_configured": true, 00:12:56.750 "data_offset": 2048, 00:12:56.750 "data_size": 63488 00:12:56.750 }, 00:12:56.750 { 00:12:56.750 "name": null, 00:12:56.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.750 "is_configured": false, 00:12:56.750 "data_offset": 2048, 00:12:56.750 "data_size": 63488 00:12:56.750 }, 00:12:56.750 { 00:12:56.750 "name": null, 00:12:56.750 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.750 "is_configured": false, 00:12:56.750 "data_offset": 2048, 00:12:56.750 "data_size": 63488 00:12:56.750 }, 00:12:56.750 { 00:12:56.750 "name": null, 00:12:56.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.750 "is_configured": false, 00:12:56.750 "data_offset": 2048, 00:12:56.750 "data_size": 63488 00:12:56.750 } 00:12:56.750 ] 00:12:56.750 }' 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.750 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.010 [2024-10-11 09:46:41.573860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.010 [2024-10-11 09:46:41.573931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.010 [2024-10-11 09:46:41.573954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:57.010 [2024-10-11 09:46:41.573965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.010 [2024-10-11 09:46:41.574442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.010 [2024-10-11 09:46:41.574479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.010 [2024-10-11 09:46:41.574583] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:57.010 [2024-10-11 09:46:41.574611] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.010 pt2 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.010 [2024-10-11 09:46:41.581844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.010 09:46:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.010 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.010 "name": "raid_bdev1", 00:12:57.010 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:57.010 "strip_size_kb": 64, 00:12:57.011 "state": "configuring", 00:12:57.011 "raid_level": "concat", 00:12:57.011 "superblock": true, 00:12:57.011 "num_base_bdevs": 4, 00:12:57.011 "num_base_bdevs_discovered": 1, 00:12:57.011 "num_base_bdevs_operational": 4, 00:12:57.011 "base_bdevs_list": [ 00:12:57.011 { 00:12:57.011 "name": "pt1", 00:12:57.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.011 "is_configured": true, 00:12:57.011 "data_offset": 2048, 00:12:57.011 "data_size": 63488 00:12:57.011 }, 00:12:57.011 { 00:12:57.011 "name": null, 00:12:57.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.011 "is_configured": false, 00:12:57.011 "data_offset": 0, 00:12:57.011 "data_size": 63488 00:12:57.011 }, 00:12:57.011 { 00:12:57.011 "name": null, 00:12:57.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.011 "is_configured": false, 00:12:57.011 "data_offset": 2048, 00:12:57.011 "data_size": 63488 00:12:57.011 }, 00:12:57.011 { 00:12:57.011 "name": null, 00:12:57.011 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.011 "is_configured": false, 00:12:57.011 "data_offset": 2048, 00:12:57.011 "data_size": 63488 00:12:57.011 } 00:12:57.011 ] 00:12:57.011 }' 00:12:57.011 09:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.011 09:46:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.587 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:57.587 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.587 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.587 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.587 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.587 [2024-10-11 09:46:42.037086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.587 [2024-10-11 09:46:42.037229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.587 [2024-10-11 09:46:42.037290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:57.587 [2024-10-11 09:46:42.037332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.588 [2024-10-11 09:46:42.037869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.588 [2024-10-11 09:46:42.037939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.588 [2024-10-11 09:46:42.038087] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:57.588 [2024-10-11 09:46:42.038146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.588 pt2 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 [2024-10-11 09:46:42.049022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.588 [2024-10-11 09:46:42.049128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.588 [2024-10-11 09:46:42.049176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:57.588 [2024-10-11 09:46:42.049218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.588 [2024-10-11 09:46:42.049629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.588 [2024-10-11 09:46:42.049689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.588 [2024-10-11 09:46:42.049803] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:57.588 [2024-10-11 09:46:42.049859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.588 pt3 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 [2024-10-11 09:46:42.060984] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:57.588 [2024-10-11 09:46:42.061027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.588 [2024-10-11 09:46:42.061045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:57.588 [2024-10-11 09:46:42.061053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.588 [2024-10-11 09:46:42.061380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.588 [2024-10-11 09:46:42.061395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:57.588 [2024-10-11 09:46:42.061455] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:57.588 [2024-10-11 09:46:42.061471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:57.588 [2024-10-11 09:46:42.061606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.588 [2024-10-11 09:46:42.061615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:57.588 [2024-10-11 09:46:42.061894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:57.588 [2024-10-11 09:46:42.062031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.588 [2024-10-11 09:46:42.062044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:57.588 [2024-10-11 09:46:42.062180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.588 pt4 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.588 "name": "raid_bdev1", 00:12:57.588 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:57.588 "strip_size_kb": 64, 00:12:57.588 "state": "online", 00:12:57.588 "raid_level": "concat", 00:12:57.588 
"superblock": true, 00:12:57.588 "num_base_bdevs": 4, 00:12:57.588 "num_base_bdevs_discovered": 4, 00:12:57.588 "num_base_bdevs_operational": 4, 00:12:57.588 "base_bdevs_list": [ 00:12:57.588 { 00:12:57.588 "name": "pt1", 00:12:57.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.588 "is_configured": true, 00:12:57.588 "data_offset": 2048, 00:12:57.588 "data_size": 63488 00:12:57.588 }, 00:12:57.588 { 00:12:57.588 "name": "pt2", 00:12:57.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.588 "is_configured": true, 00:12:57.588 "data_offset": 2048, 00:12:57.588 "data_size": 63488 00:12:57.588 }, 00:12:57.588 { 00:12:57.588 "name": "pt3", 00:12:57.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.588 "is_configured": true, 00:12:57.588 "data_offset": 2048, 00:12:57.588 "data_size": 63488 00:12:57.588 }, 00:12:57.588 { 00:12:57.588 "name": "pt4", 00:12:57.588 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.588 "is_configured": true, 00:12:57.588 "data_offset": 2048, 00:12:57.588 "data_size": 63488 00:12:57.588 } 00:12:57.588 ] 00:12:57.588 }' 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.588 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.157 09:46:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.157 [2024-10-11 09:46:42.528658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.157 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.157 "name": "raid_bdev1", 00:12:58.157 "aliases": [ 00:12:58.157 "be0510a0-1f35-4abc-8667-2a7a4dd77fc0" 00:12:58.157 ], 00:12:58.157 "product_name": "Raid Volume", 00:12:58.157 "block_size": 512, 00:12:58.157 "num_blocks": 253952, 00:12:58.157 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:58.157 "assigned_rate_limits": { 00:12:58.157 "rw_ios_per_sec": 0, 00:12:58.157 "rw_mbytes_per_sec": 0, 00:12:58.157 "r_mbytes_per_sec": 0, 00:12:58.157 "w_mbytes_per_sec": 0 00:12:58.157 }, 00:12:58.157 "claimed": false, 00:12:58.157 "zoned": false, 00:12:58.157 "supported_io_types": { 00:12:58.157 "read": true, 00:12:58.157 "write": true, 00:12:58.157 "unmap": true, 00:12:58.157 "flush": true, 00:12:58.157 "reset": true, 00:12:58.157 "nvme_admin": false, 00:12:58.157 "nvme_io": false, 00:12:58.157 "nvme_io_md": false, 00:12:58.157 "write_zeroes": true, 00:12:58.157 "zcopy": false, 00:12:58.157 "get_zone_info": false, 00:12:58.157 "zone_management": false, 00:12:58.157 "zone_append": false, 00:12:58.157 "compare": false, 00:12:58.157 "compare_and_write": false, 00:12:58.157 "abort": false, 00:12:58.157 "seek_hole": false, 00:12:58.157 "seek_data": false, 00:12:58.157 "copy": false, 00:12:58.157 "nvme_iov_md": false 00:12:58.157 }, 00:12:58.157 
"memory_domains": [ 00:12:58.157 { 00:12:58.157 "dma_device_id": "system", 00:12:58.157 "dma_device_type": 1 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.157 "dma_device_type": 2 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "dma_device_id": "system", 00:12:58.157 "dma_device_type": 1 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.157 "dma_device_type": 2 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "dma_device_id": "system", 00:12:58.157 "dma_device_type": 1 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.157 "dma_device_type": 2 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "dma_device_id": "system", 00:12:58.157 "dma_device_type": 1 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.157 "dma_device_type": 2 00:12:58.157 } 00:12:58.157 ], 00:12:58.157 "driver_specific": { 00:12:58.157 "raid": { 00:12:58.157 "uuid": "be0510a0-1f35-4abc-8667-2a7a4dd77fc0", 00:12:58.157 "strip_size_kb": 64, 00:12:58.157 "state": "online", 00:12:58.157 "raid_level": "concat", 00:12:58.157 "superblock": true, 00:12:58.157 "num_base_bdevs": 4, 00:12:58.157 "num_base_bdevs_discovered": 4, 00:12:58.157 "num_base_bdevs_operational": 4, 00:12:58.157 "base_bdevs_list": [ 00:12:58.157 { 00:12:58.157 "name": "pt1", 00:12:58.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.157 "is_configured": true, 00:12:58.157 "data_offset": 2048, 00:12:58.157 "data_size": 63488 00:12:58.157 }, 00:12:58.157 { 00:12:58.157 "name": "pt2", 00:12:58.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.158 "is_configured": true, 00:12:58.158 "data_offset": 2048, 00:12:58.158 "data_size": 63488 00:12:58.158 }, 00:12:58.158 { 00:12:58.158 "name": "pt3", 00:12:58.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.158 "is_configured": true, 00:12:58.158 "data_offset": 2048, 00:12:58.158 "data_size": 63488 
00:12:58.158 }, 00:12:58.158 { 00:12:58.158 "name": "pt4", 00:12:58.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.158 "is_configured": true, 00:12:58.158 "data_offset": 2048, 00:12:58.158 "data_size": 63488 00:12:58.158 } 00:12:58.158 ] 00:12:58.158 } 00:12:58.158 } 00:12:58.158 }' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:58.158 pt2 00:12:58.158 pt3 00:12:58.158 pt4' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.158 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.418 [2024-10-11 09:46:42.852173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' be0510a0-1f35-4abc-8667-2a7a4dd77fc0 '!=' be0510a0-1f35-4abc-8667-2a7a4dd77fc0 ']' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73119 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73119 ']' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73119 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73119 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73119' 00:12:58.418 killing process with pid 73119 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73119 00:12:58.418 [2024-10-11 09:46:42.937159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.418 [2024-10-11 09:46:42.937263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.418 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73119 00:12:58.418 [2024-10-11 09:46:42.937352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.418 [2024-10-11 09:46:42.937362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:58.986 [2024-10-11 09:46:43.349269] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.920 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:59.920 00:12:59.920 real 0m5.681s 00:12:59.920 user 0m8.050s 00:12:59.920 sys 0m1.065s 00:12:59.920 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.920 ************************************ 00:12:59.920 END TEST raid_superblock_test 00:12:59.920 ************************************ 00:12:59.920 09:46:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 09:46:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:00.178 09:46:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:00.178 09:46:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.178 09:46:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 ************************************ 00:13:00.178 START TEST raid_read_error_test 00:13:00.178 ************************************ 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vYDb3ArNAk 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73389 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73389 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73389 ']' 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.178 09:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.178 [2024-10-11 09:46:44.697299] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:13:00.178 [2024-10-11 09:46:44.697502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73389 ] 00:13:00.438 [2024-10-11 09:46:44.845514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.438 [2024-10-11 09:46:44.967784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.709 [2024-10-11 09:46:45.188245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.709 [2024-10-11 09:46:45.188295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.981 BaseBdev1_malloc 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.981 true 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.981 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.241 [2024-10-11 09:46:45.612596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:01.241 [2024-10-11 09:46:45.612744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.241 [2024-10-11 09:46:45.612778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:01.241 [2024-10-11 09:46:45.612791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.241 [2024-10-11 09:46:45.614998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.241 [2024-10-11 09:46:45.615044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.241 BaseBdev1 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.241 BaseBdev2_malloc 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.241 true 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.241 [2024-10-11 09:46:45.677127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:01.241 [2024-10-11 09:46:45.677263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.241 [2024-10-11 09:46:45.677290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:01.241 [2024-10-11 09:46:45.677301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.241 [2024-10-11 09:46:45.679453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.241 [2024-10-11 09:46:45.679495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.241 BaseBdev2 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.241 BaseBdev3_malloc 00:13:01.241 09:46:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.241 true 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.241 [2024-10-11 09:46:45.756931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:01.241 [2024-10-11 09:46:45.757036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.241 [2024-10-11 09:46:45.757063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:01.241 [2024-10-11 09:46:45.757075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.241 [2024-10-11 09:46:45.759300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.241 [2024-10-11 09:46:45.759384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:01.241 BaseBdev3 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.241 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.242 BaseBdev4_malloc 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.242 true 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.242 [2024-10-11 09:46:45.827197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:01.242 [2024-10-11 09:46:45.827257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.242 [2024-10-11 09:46:45.827280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.242 [2024-10-11 09:46:45.827294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.242 [2024-10-11 09:46:45.829653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.242 [2024-10-11 09:46:45.829777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:01.242 BaseBdev4 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.242 [2024-10-11 09:46:45.839274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.242 [2024-10-11 09:46:45.841384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.242 [2024-10-11 09:46:45.841569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.242 [2024-10-11 09:46:45.841656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:01.242 [2024-10-11 09:46:45.841927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:01.242 [2024-10-11 09:46:45.841944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:01.242 [2024-10-11 09:46:45.842255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:01.242 [2024-10-11 09:46:45.842430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:01.242 [2024-10-11 09:46:45.842440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:01.242 [2024-10-11 09:46:45.842637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:01.242 09:46:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.242 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.500 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.500 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.500 "name": "raid_bdev1", 00:13:01.500 "uuid": "9768c8e0-e402-4216-8fa6-1bce974a4919", 00:13:01.500 "strip_size_kb": 64, 00:13:01.500 "state": "online", 00:13:01.500 "raid_level": "concat", 00:13:01.500 "superblock": true, 00:13:01.500 "num_base_bdevs": 4, 00:13:01.500 "num_base_bdevs_discovered": 4, 00:13:01.500 "num_base_bdevs_operational": 4, 00:13:01.500 "base_bdevs_list": [ 
00:13:01.500 { 00:13:01.500 "name": "BaseBdev1", 00:13:01.500 "uuid": "d3df97d0-5c2d-5503-a9dc-b82478d6dc27", 00:13:01.500 "is_configured": true, 00:13:01.500 "data_offset": 2048, 00:13:01.500 "data_size": 63488 00:13:01.500 }, 00:13:01.500 { 00:13:01.500 "name": "BaseBdev2", 00:13:01.500 "uuid": "87972f7f-381a-570d-97f1-02ee6cb39dab", 00:13:01.500 "is_configured": true, 00:13:01.500 "data_offset": 2048, 00:13:01.500 "data_size": 63488 00:13:01.500 }, 00:13:01.500 { 00:13:01.500 "name": "BaseBdev3", 00:13:01.500 "uuid": "ceff6661-778d-5438-9533-92d1c9634844", 00:13:01.500 "is_configured": true, 00:13:01.500 "data_offset": 2048, 00:13:01.500 "data_size": 63488 00:13:01.500 }, 00:13:01.500 { 00:13:01.500 "name": "BaseBdev4", 00:13:01.500 "uuid": "c81fb964-89e2-5f20-9d4d-5cae4b4ff086", 00:13:01.500 "is_configured": true, 00:13:01.500 "data_offset": 2048, 00:13:01.500 "data_size": 63488 00:13:01.500 } 00:13:01.500 ] 00:13:01.500 }' 00:13:01.500 09:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.500 09:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.759 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:01.759 09:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.759 [2024-10-11 09:46:46.355966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.697 09:46:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.697 09:46:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.697 "name": "raid_bdev1", 00:13:02.697 "uuid": "9768c8e0-e402-4216-8fa6-1bce974a4919", 00:13:02.697 "strip_size_kb": 64, 00:13:02.697 "state": "online", 00:13:02.697 "raid_level": "concat", 00:13:02.697 "superblock": true, 00:13:02.697 "num_base_bdevs": 4, 00:13:02.697 "num_base_bdevs_discovered": 4, 00:13:02.697 "num_base_bdevs_operational": 4, 00:13:02.697 "base_bdevs_list": [ 00:13:02.697 { 00:13:02.697 "name": "BaseBdev1", 00:13:02.697 "uuid": "d3df97d0-5c2d-5503-a9dc-b82478d6dc27", 00:13:02.697 "is_configured": true, 00:13:02.697 "data_offset": 2048, 00:13:02.697 "data_size": 63488 00:13:02.697 }, 00:13:02.697 { 00:13:02.697 "name": "BaseBdev2", 00:13:02.697 "uuid": "87972f7f-381a-570d-97f1-02ee6cb39dab", 00:13:02.697 "is_configured": true, 00:13:02.697 "data_offset": 2048, 00:13:02.697 "data_size": 63488 00:13:02.697 }, 00:13:02.697 { 00:13:02.697 "name": "BaseBdev3", 00:13:02.697 "uuid": "ceff6661-778d-5438-9533-92d1c9634844", 00:13:02.697 "is_configured": true, 00:13:02.697 "data_offset": 2048, 00:13:02.697 "data_size": 63488 00:13:02.697 }, 00:13:02.697 { 00:13:02.697 "name": "BaseBdev4", 00:13:02.697 "uuid": "c81fb964-89e2-5f20-9d4d-5cae4b4ff086", 00:13:02.697 "is_configured": true, 00:13:02.697 "data_offset": 2048, 00:13:02.697 "data_size": 63488 00:13:02.697 } 00:13:02.697 ] 00:13:02.697 }' 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.697 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.266 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:03.266 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.266 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.266 [2024-10-11 09:46:47.732395] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.266 [2024-10-11 09:46:47.732485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.266 [2024-10-11 09:46:47.735284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.266 [2024-10-11 09:46:47.735346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.267 [2024-10-11 09:46:47.735394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.267 [2024-10-11 09:46:47.735409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.267 { 00:13:03.267 "results": [ 00:13:03.267 { 00:13:03.267 "job": "raid_bdev1", 00:13:03.267 "core_mask": "0x1", 00:13:03.267 "workload": "randrw", 00:13:03.267 "percentage": 50, 00:13:03.267 "status": "finished", 00:13:03.267 "queue_depth": 1, 00:13:03.267 "io_size": 131072, 00:13:03.267 "runtime": 1.377238, 00:13:03.267 "iops": 14780.306671758985, 00:13:03.267 "mibps": 1847.538333969873, 00:13:03.267 "io_failed": 1, 00:13:03.267 "io_timeout": 0, 00:13:03.267 "avg_latency_us": 94.2189717902257, 00:13:03.267 "min_latency_us": 26.717903930131005, 00:13:03.267 "max_latency_us": 1638.4 00:13:03.267 } 00:13:03.267 ], 00:13:03.267 "core_count": 1 00:13:03.267 } 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73389 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73389 ']' 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73389 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73389 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73389' 00:13:03.267 killing process with pid 73389 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73389 00:13:03.267 [2024-10-11 09:46:47.781624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.267 09:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73389 00:13:03.542 [2024-10-11 09:46:48.111696] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vYDb3ArNAk 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:13:04.931 00:13:04.931 real 0m4.733s 00:13:04.931 user 0m5.581s 00:13:04.931 sys 0m0.589s 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:13:04.931 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.931 ************************************ 00:13:04.931 END TEST raid_read_error_test 00:13:04.931 ************************************ 00:13:04.931 09:46:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:04.931 09:46:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:04.931 09:46:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.931 09:46:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:04.931 ************************************ 00:13:04.931 START TEST raid_write_error_test 00:13:04.931 ************************************ 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I8gRiBC14b 00:13:04.931 09:46:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73536 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73536 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73536 ']' 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.931 09:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.931 [2024-10-11 09:46:49.496389] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:13:04.931 [2024-10-11 09:46:49.496521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73536 ] 00:13:05.190 [2024-10-11 09:46:49.662610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.190 [2024-10-11 09:46:49.788518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.450 [2024-10-11 09:46:50.019069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.450 [2024-10-11 09:46:50.019128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 BaseBdev1_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 true 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 [2024-10-11 09:46:50.430453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:06.020 [2024-10-11 09:46:50.430523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.020 [2024-10-11 09:46:50.430551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:06.020 [2024-10-11 09:46:50.430566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.020 [2024-10-11 09:46:50.433174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.020 [2024-10-11 09:46:50.433297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.020 BaseBdev1 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 BaseBdev2_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:06.020 09:46:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 true 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 [2024-10-11 09:46:50.505702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:06.020 [2024-10-11 09:46:50.505780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.020 [2024-10-11 09:46:50.505801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:06.020 [2024-10-11 09:46:50.505812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.020 [2024-10-11 09:46:50.508284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.020 [2024-10-11 09:46:50.508331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:06.020 BaseBdev2 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:06.020 BaseBdev3_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 true 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 [2024-10-11 09:46:50.591027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:06.020 [2024-10-11 09:46:50.591103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.020 [2024-10-11 09:46:50.591126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:06.020 [2024-10-11 09:46:50.591137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.020 [2024-10-11 09:46:50.593571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.020 [2024-10-11 09:46:50.593667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:06.020 BaseBdev3 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 BaseBdev4_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.310 true 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.310 [2024-10-11 09:46:50.664658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:06.310 [2024-10-11 09:46:50.664750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.310 [2024-10-11 09:46:50.664776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:06.310 [2024-10-11 09:46:50.664792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.310 [2024-10-11 09:46:50.667299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.310 [2024-10-11 09:46:50.667346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:06.310 BaseBdev4 
00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.310 [2024-10-11 09:46:50.676687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.310 [2024-10-11 09:46:50.678827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.310 [2024-10-11 09:46:50.678918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.310 [2024-10-11 09:46:50.678992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:06.310 [2024-10-11 09:46:50.679264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:06.310 [2024-10-11 09:46:50.679290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:06.310 [2024-10-11 09:46:50.679591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:06.310 [2024-10-11 09:46:50.679792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:06.310 [2024-10-11 09:46:50.679805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:06.310 [2024-10-11 09:46:50.679988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.310 "name": "raid_bdev1", 00:13:06.310 "uuid": "9e6457e5-6ebd-4ba3-8090-0d36d295a159", 00:13:06.310 "strip_size_kb": 64, 00:13:06.310 "state": "online", 00:13:06.310 "raid_level": "concat", 00:13:06.310 "superblock": true, 00:13:06.310 "num_base_bdevs": 4, 00:13:06.310 "num_base_bdevs_discovered": 4, 00:13:06.310 
"num_base_bdevs_operational": 4, 00:13:06.310 "base_bdevs_list": [ 00:13:06.310 { 00:13:06.310 "name": "BaseBdev1", 00:13:06.310 "uuid": "de3d2232-d715-5996-b520-1cabc589d4f4", 00:13:06.310 "is_configured": true, 00:13:06.310 "data_offset": 2048, 00:13:06.310 "data_size": 63488 00:13:06.310 }, 00:13:06.310 { 00:13:06.310 "name": "BaseBdev2", 00:13:06.310 "uuid": "0df9839b-6483-5482-b059-830080b7276b", 00:13:06.310 "is_configured": true, 00:13:06.310 "data_offset": 2048, 00:13:06.310 "data_size": 63488 00:13:06.310 }, 00:13:06.310 { 00:13:06.310 "name": "BaseBdev3", 00:13:06.310 "uuid": "1ecd64ba-5289-54c4-a66e-fb89fab9b240", 00:13:06.310 "is_configured": true, 00:13:06.310 "data_offset": 2048, 00:13:06.310 "data_size": 63488 00:13:06.310 }, 00:13:06.310 { 00:13:06.310 "name": "BaseBdev4", 00:13:06.310 "uuid": "77d15ee5-b46d-5156-abd4-279e6696729f", 00:13:06.310 "is_configured": true, 00:13:06.310 "data_offset": 2048, 00:13:06.310 "data_size": 63488 00:13:06.310 } 00:13:06.310 ] 00:13:06.310 }' 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.310 09:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.569 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:06.569 09:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:06.569 [2024-10-11 09:46:51.197383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.507 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.766 09:46:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.766 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.766 "name": "raid_bdev1", 00:13:07.766 "uuid": "9e6457e5-6ebd-4ba3-8090-0d36d295a159", 00:13:07.766 "strip_size_kb": 64, 00:13:07.766 "state": "online", 00:13:07.766 "raid_level": "concat", 00:13:07.766 "superblock": true, 00:13:07.766 "num_base_bdevs": 4, 00:13:07.766 "num_base_bdevs_discovered": 4, 00:13:07.766 "num_base_bdevs_operational": 4, 00:13:07.766 "base_bdevs_list": [ 00:13:07.766 { 00:13:07.766 "name": "BaseBdev1", 00:13:07.766 "uuid": "de3d2232-d715-5996-b520-1cabc589d4f4", 00:13:07.766 "is_configured": true, 00:13:07.766 "data_offset": 2048, 00:13:07.766 "data_size": 63488 00:13:07.766 }, 00:13:07.766 { 00:13:07.766 "name": "BaseBdev2", 00:13:07.766 "uuid": "0df9839b-6483-5482-b059-830080b7276b", 00:13:07.766 "is_configured": true, 00:13:07.766 "data_offset": 2048, 00:13:07.766 "data_size": 63488 00:13:07.766 }, 00:13:07.766 { 00:13:07.766 "name": "BaseBdev3", 00:13:07.766 "uuid": "1ecd64ba-5289-54c4-a66e-fb89fab9b240", 00:13:07.766 "is_configured": true, 00:13:07.766 "data_offset": 2048, 00:13:07.766 "data_size": 63488 00:13:07.766 }, 00:13:07.766 { 00:13:07.766 "name": "BaseBdev4", 00:13:07.766 "uuid": "77d15ee5-b46d-5156-abd4-279e6696729f", 00:13:07.766 "is_configured": true, 00:13:07.766 "data_offset": 2048, 00:13:07.766 "data_size": 63488 00:13:07.766 } 00:13:07.766 ] 00:13:07.766 }' 00:13:07.766 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.766 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.026 [2024-10-11 09:46:52.590220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.026 [2024-10-11 09:46:52.590260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.026 [2024-10-11 09:46:52.593251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.026 [2024-10-11 09:46:52.593320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.026 [2024-10-11 09:46:52.593370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.026 [2024-10-11 09:46:52.593386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:08.026 { 00:13:08.026 "results": [ 00:13:08.026 { 00:13:08.026 "job": "raid_bdev1", 00:13:08.026 "core_mask": "0x1", 00:13:08.026 "workload": "randrw", 00:13:08.026 "percentage": 50, 00:13:08.026 "status": "finished", 00:13:08.026 "queue_depth": 1, 00:13:08.026 "io_size": 131072, 00:13:08.026 "runtime": 1.393399, 00:13:08.026 "iops": 13888.340669112005, 00:13:08.026 "mibps": 1736.0425836390007, 00:13:08.026 "io_failed": 1, 00:13:08.026 "io_timeout": 0, 00:13:08.026 "avg_latency_us": 100.00626575390746, 00:13:08.026 "min_latency_us": 27.72401746724891, 00:13:08.026 "max_latency_us": 1459.5353711790392 00:13:08.026 } 00:13:08.026 ], 00:13:08.026 "core_count": 1 00:13:08.026 } 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73536 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73536 ']' 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73536 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73536 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73536' 00:13:08.026 killing process with pid 73536 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73536 00:13:08.026 [2024-10-11 09:46:52.637500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.026 09:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73536 00:13:08.594 [2024-10-11 09:46:52.969920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I8gRiBC14b 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:09.973 00:13:09.973 real 0m4.822s 00:13:09.973 user 0m5.654s 
00:13:09.973 sys 0m0.636s 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.973 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.973 ************************************ 00:13:09.973 END TEST raid_write_error_test 00:13:09.973 ************************************ 00:13:09.973 09:46:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:09.973 09:46:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:09.973 09:46:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:09.973 09:46:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.973 09:46:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.973 ************************************ 00:13:09.973 START TEST raid_state_function_test 00:13:09.973 ************************************ 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.973 
09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.973 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:09.974 09:46:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73675 00:13:09.974 Process raid pid: 73675 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73675' 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73675 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73675 ']' 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.974 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.974 [2024-10-11 09:46:54.388090] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:13:09.974 [2024-10-11 09:46:54.388307] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.974 [2024-10-11 09:46:54.551900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.233 [2024-10-11 09:46:54.693815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.491 [2024-10-11 09:46:54.942899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.491 [2024-10-11 09:46:54.943046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.751 [2024-10-11 09:46:55.285283] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.751 [2024-10-11 09:46:55.285344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.751 [2024-10-11 09:46:55.285356] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.751 [2024-10-11 09:46:55.285367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.751 [2024-10-11 09:46:55.285374] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:10.751 [2024-10-11 09:46:55.285385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.751 [2024-10-11 09:46:55.285392] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.751 [2024-10-11 09:46:55.285402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.751 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.751 "name": "Existed_Raid", 00:13:10.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.751 "strip_size_kb": 0, 00:13:10.751 "state": "configuring", 00:13:10.751 "raid_level": "raid1", 00:13:10.751 "superblock": false, 00:13:10.752 "num_base_bdevs": 4, 00:13:10.752 "num_base_bdevs_discovered": 0, 00:13:10.752 "num_base_bdevs_operational": 4, 00:13:10.752 "base_bdevs_list": [ 00:13:10.752 { 00:13:10.752 "name": "BaseBdev1", 00:13:10.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.752 "is_configured": false, 00:13:10.752 "data_offset": 0, 00:13:10.752 "data_size": 0 00:13:10.752 }, 00:13:10.752 { 00:13:10.752 "name": "BaseBdev2", 00:13:10.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.752 "is_configured": false, 00:13:10.752 "data_offset": 0, 00:13:10.752 "data_size": 0 00:13:10.752 }, 00:13:10.752 { 00:13:10.752 "name": "BaseBdev3", 00:13:10.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.752 "is_configured": false, 00:13:10.752 "data_offset": 0, 00:13:10.752 "data_size": 0 00:13:10.752 }, 00:13:10.752 { 00:13:10.752 "name": "BaseBdev4", 00:13:10.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.752 "is_configured": false, 00:13:10.752 "data_offset": 0, 00:13:10.752 "data_size": 0 00:13:10.752 } 00:13:10.752 ] 00:13:10.752 }' 00:13:10.752 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.752 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 [2024-10-11 09:46:55.736438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.321 [2024-10-11 09:46:55.736577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 [2024-10-11 09:46:55.748454] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:11.321 [2024-10-11 09:46:55.748557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:11.321 [2024-10-11 09:46:55.748590] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.321 [2024-10-11 09:46:55.748617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.321 [2024-10-11 09:46:55.748638] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:11.321 [2024-10-11 09:46:55.748662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:11.321 [2024-10-11 09:46:55.748683] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:11.321 [2024-10-11 09:46:55.748705] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 [2024-10-11 09:46:55.806045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.321 BaseBdev1 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 [ 00:13:11.321 { 00:13:11.321 "name": "BaseBdev1", 00:13:11.321 "aliases": [ 00:13:11.321 "9e2fd6d7-2867-4e01-a210-509d8d4d14fe" 00:13:11.321 ], 00:13:11.321 "product_name": "Malloc disk", 00:13:11.321 "block_size": 512, 00:13:11.321 "num_blocks": 65536, 00:13:11.321 "uuid": "9e2fd6d7-2867-4e01-a210-509d8d4d14fe", 00:13:11.321 "assigned_rate_limits": { 00:13:11.321 "rw_ios_per_sec": 0, 00:13:11.321 "rw_mbytes_per_sec": 0, 00:13:11.321 "r_mbytes_per_sec": 0, 00:13:11.321 "w_mbytes_per_sec": 0 00:13:11.321 }, 00:13:11.321 "claimed": true, 00:13:11.321 "claim_type": "exclusive_write", 00:13:11.321 "zoned": false, 00:13:11.321 "supported_io_types": { 00:13:11.321 "read": true, 00:13:11.321 "write": true, 00:13:11.321 "unmap": true, 00:13:11.321 "flush": true, 00:13:11.321 "reset": true, 00:13:11.321 "nvme_admin": false, 00:13:11.321 "nvme_io": false, 00:13:11.321 "nvme_io_md": false, 00:13:11.321 "write_zeroes": true, 00:13:11.321 "zcopy": true, 00:13:11.321 "get_zone_info": false, 00:13:11.321 "zone_management": false, 00:13:11.321 "zone_append": false, 00:13:11.321 "compare": false, 00:13:11.321 "compare_and_write": false, 00:13:11.321 "abort": true, 00:13:11.321 "seek_hole": false, 00:13:11.321 "seek_data": false, 00:13:11.321 "copy": true, 00:13:11.321 "nvme_iov_md": false 00:13:11.321 }, 00:13:11.321 "memory_domains": [ 00:13:11.321 { 00:13:11.321 "dma_device_id": "system", 00:13:11.321 "dma_device_type": 1 00:13:11.321 }, 00:13:11.321 { 00:13:11.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.321 "dma_device_type": 2 00:13:11.321 } 00:13:11.321 ], 00:13:11.321 "driver_specific": {} 00:13:11.321 } 00:13:11.321 ] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.321 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.321 "name": "Existed_Raid", 
00:13:11.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.321 "strip_size_kb": 0, 00:13:11.321 "state": "configuring", 00:13:11.321 "raid_level": "raid1", 00:13:11.321 "superblock": false, 00:13:11.321 "num_base_bdevs": 4, 00:13:11.321 "num_base_bdevs_discovered": 1, 00:13:11.321 "num_base_bdevs_operational": 4, 00:13:11.321 "base_bdevs_list": [ 00:13:11.321 { 00:13:11.321 "name": "BaseBdev1", 00:13:11.321 "uuid": "9e2fd6d7-2867-4e01-a210-509d8d4d14fe", 00:13:11.321 "is_configured": true, 00:13:11.321 "data_offset": 0, 00:13:11.321 "data_size": 65536 00:13:11.321 }, 00:13:11.321 { 00:13:11.321 "name": "BaseBdev2", 00:13:11.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.321 "is_configured": false, 00:13:11.321 "data_offset": 0, 00:13:11.321 "data_size": 0 00:13:11.321 }, 00:13:11.321 { 00:13:11.321 "name": "BaseBdev3", 00:13:11.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.321 "is_configured": false, 00:13:11.321 "data_offset": 0, 00:13:11.321 "data_size": 0 00:13:11.321 }, 00:13:11.321 { 00:13:11.321 "name": "BaseBdev4", 00:13:11.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.322 "is_configured": false, 00:13:11.322 "data_offset": 0, 00:13:11.322 "data_size": 0 00:13:11.322 } 00:13:11.322 ] 00:13:11.322 }' 00:13:11.322 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.322 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.889 [2024-10-11 09:46:56.245371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.889 [2024-10-11 09:46:56.245529] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.889 [2024-10-11 09:46:56.257411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.889 [2024-10-11 09:46:56.259515] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.889 [2024-10-11 09:46:56.259599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.889 [2024-10-11 09:46:56.259629] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:11.889 [2024-10-11 09:46:56.259654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:11.889 [2024-10-11 09:46:56.259680] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:11.889 [2024-10-11 09:46:56.259720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:11.889 
09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.889 "name": "Existed_Raid", 00:13:11.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.889 "strip_size_kb": 0, 00:13:11.889 "state": "configuring", 00:13:11.889 "raid_level": "raid1", 00:13:11.889 "superblock": false, 00:13:11.889 "num_base_bdevs": 4, 00:13:11.889 "num_base_bdevs_discovered": 1, 
00:13:11.889 "num_base_bdevs_operational": 4, 00:13:11.889 "base_bdevs_list": [ 00:13:11.889 { 00:13:11.889 "name": "BaseBdev1", 00:13:11.889 "uuid": "9e2fd6d7-2867-4e01-a210-509d8d4d14fe", 00:13:11.889 "is_configured": true, 00:13:11.889 "data_offset": 0, 00:13:11.889 "data_size": 65536 00:13:11.889 }, 00:13:11.889 { 00:13:11.889 "name": "BaseBdev2", 00:13:11.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.889 "is_configured": false, 00:13:11.889 "data_offset": 0, 00:13:11.889 "data_size": 0 00:13:11.889 }, 00:13:11.889 { 00:13:11.889 "name": "BaseBdev3", 00:13:11.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.889 "is_configured": false, 00:13:11.889 "data_offset": 0, 00:13:11.889 "data_size": 0 00:13:11.889 }, 00:13:11.889 { 00:13:11.889 "name": "BaseBdev4", 00:13:11.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.889 "is_configured": false, 00:13:11.889 "data_offset": 0, 00:13:11.889 "data_size": 0 00:13:11.889 } 00:13:11.889 ] 00:13:11.889 }' 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.889 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.148 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:12.148 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.148 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.148 [2024-10-11 09:46:56.778212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.408 BaseBdev2 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.408 [ 00:13:12.408 { 00:13:12.408 "name": "BaseBdev2", 00:13:12.408 "aliases": [ 00:13:12.408 "8936c893-13b1-466e-84c9-6565f9f3ea23" 00:13:12.408 ], 00:13:12.408 "product_name": "Malloc disk", 00:13:12.408 "block_size": 512, 00:13:12.408 "num_blocks": 65536, 00:13:12.408 "uuid": "8936c893-13b1-466e-84c9-6565f9f3ea23", 00:13:12.408 "assigned_rate_limits": { 00:13:12.408 "rw_ios_per_sec": 0, 00:13:12.408 "rw_mbytes_per_sec": 0, 00:13:12.408 "r_mbytes_per_sec": 0, 00:13:12.408 "w_mbytes_per_sec": 0 00:13:12.408 }, 00:13:12.408 "claimed": true, 00:13:12.408 "claim_type": "exclusive_write", 00:13:12.408 "zoned": false, 00:13:12.408 "supported_io_types": { 00:13:12.408 "read": true, 
00:13:12.408 "write": true, 00:13:12.408 "unmap": true, 00:13:12.408 "flush": true, 00:13:12.408 "reset": true, 00:13:12.408 "nvme_admin": false, 00:13:12.408 "nvme_io": false, 00:13:12.408 "nvme_io_md": false, 00:13:12.408 "write_zeroes": true, 00:13:12.408 "zcopy": true, 00:13:12.408 "get_zone_info": false, 00:13:12.408 "zone_management": false, 00:13:12.408 "zone_append": false, 00:13:12.408 "compare": false, 00:13:12.408 "compare_and_write": false, 00:13:12.408 "abort": true, 00:13:12.408 "seek_hole": false, 00:13:12.408 "seek_data": false, 00:13:12.408 "copy": true, 00:13:12.408 "nvme_iov_md": false 00:13:12.408 }, 00:13:12.408 "memory_domains": [ 00:13:12.408 { 00:13:12.408 "dma_device_id": "system", 00:13:12.408 "dma_device_type": 1 00:13:12.408 }, 00:13:12.408 { 00:13:12.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.408 "dma_device_type": 2 00:13:12.408 } 00:13:12.408 ], 00:13:12.408 "driver_specific": {} 00:13:12.408 } 00:13:12.408 ] 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.408 "name": "Existed_Raid", 00:13:12.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.408 "strip_size_kb": 0, 00:13:12.408 "state": "configuring", 00:13:12.408 "raid_level": "raid1", 00:13:12.408 "superblock": false, 00:13:12.408 "num_base_bdevs": 4, 00:13:12.408 "num_base_bdevs_discovered": 2, 00:13:12.408 "num_base_bdevs_operational": 4, 00:13:12.408 "base_bdevs_list": [ 00:13:12.408 { 00:13:12.408 "name": "BaseBdev1", 00:13:12.408 "uuid": "9e2fd6d7-2867-4e01-a210-509d8d4d14fe", 00:13:12.408 "is_configured": true, 00:13:12.408 "data_offset": 0, 00:13:12.408 "data_size": 65536 00:13:12.408 }, 00:13:12.408 { 00:13:12.408 "name": "BaseBdev2", 00:13:12.408 "uuid": "8936c893-13b1-466e-84c9-6565f9f3ea23", 00:13:12.408 "is_configured": true, 
00:13:12.408 "data_offset": 0, 00:13:12.408 "data_size": 65536 00:13:12.408 }, 00:13:12.408 { 00:13:12.408 "name": "BaseBdev3", 00:13:12.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.408 "is_configured": false, 00:13:12.408 "data_offset": 0, 00:13:12.408 "data_size": 0 00:13:12.408 }, 00:13:12.408 { 00:13:12.408 "name": "BaseBdev4", 00:13:12.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.408 "is_configured": false, 00:13:12.408 "data_offset": 0, 00:13:12.408 "data_size": 0 00:13:12.408 } 00:13:12.408 ] 00:13:12.408 }' 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.408 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.977 [2024-10-11 09:46:57.410584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.977 BaseBdev3 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.977 [ 00:13:12.977 { 00:13:12.977 "name": "BaseBdev3", 00:13:12.977 "aliases": [ 00:13:12.977 "b6fee61b-12b4-4586-a3fb-2f94ffd0a40f" 00:13:12.977 ], 00:13:12.977 "product_name": "Malloc disk", 00:13:12.977 "block_size": 512, 00:13:12.977 "num_blocks": 65536, 00:13:12.977 "uuid": "b6fee61b-12b4-4586-a3fb-2f94ffd0a40f", 00:13:12.977 "assigned_rate_limits": { 00:13:12.977 "rw_ios_per_sec": 0, 00:13:12.977 "rw_mbytes_per_sec": 0, 00:13:12.977 "r_mbytes_per_sec": 0, 00:13:12.977 "w_mbytes_per_sec": 0 00:13:12.977 }, 00:13:12.977 "claimed": true, 00:13:12.977 "claim_type": "exclusive_write", 00:13:12.977 "zoned": false, 00:13:12.977 "supported_io_types": { 00:13:12.977 "read": true, 00:13:12.977 "write": true, 00:13:12.977 "unmap": true, 00:13:12.977 "flush": true, 00:13:12.977 "reset": true, 00:13:12.977 "nvme_admin": false, 00:13:12.977 "nvme_io": false, 00:13:12.977 "nvme_io_md": false, 00:13:12.977 "write_zeroes": true, 00:13:12.977 "zcopy": true, 00:13:12.977 "get_zone_info": false, 00:13:12.977 "zone_management": false, 00:13:12.977 "zone_append": false, 00:13:12.977 "compare": false, 00:13:12.977 "compare_and_write": false, 
00:13:12.977 "abort": true, 00:13:12.977 "seek_hole": false, 00:13:12.977 "seek_data": false, 00:13:12.977 "copy": true, 00:13:12.977 "nvme_iov_md": false 00:13:12.977 }, 00:13:12.977 "memory_domains": [ 00:13:12.977 { 00:13:12.977 "dma_device_id": "system", 00:13:12.977 "dma_device_type": 1 00:13:12.977 }, 00:13:12.977 { 00:13:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.977 "dma_device_type": 2 00:13:12.977 } 00:13:12.977 ], 00:13:12.977 "driver_specific": {} 00:13:12.977 } 00:13:12.977 ] 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.977 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.977 "name": "Existed_Raid", 00:13:12.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.977 "strip_size_kb": 0, 00:13:12.977 "state": "configuring", 00:13:12.977 "raid_level": "raid1", 00:13:12.977 "superblock": false, 00:13:12.977 "num_base_bdevs": 4, 00:13:12.977 "num_base_bdevs_discovered": 3, 00:13:12.977 "num_base_bdevs_operational": 4, 00:13:12.977 "base_bdevs_list": [ 00:13:12.977 { 00:13:12.977 "name": "BaseBdev1", 00:13:12.977 "uuid": "9e2fd6d7-2867-4e01-a210-509d8d4d14fe", 00:13:12.977 "is_configured": true, 00:13:12.977 "data_offset": 0, 00:13:12.977 "data_size": 65536 00:13:12.977 }, 00:13:12.977 { 00:13:12.977 "name": "BaseBdev2", 00:13:12.977 "uuid": "8936c893-13b1-466e-84c9-6565f9f3ea23", 00:13:12.977 "is_configured": true, 00:13:12.977 "data_offset": 0, 00:13:12.977 "data_size": 65536 00:13:12.977 }, 00:13:12.977 { 00:13:12.977 "name": "BaseBdev3", 00:13:12.977 "uuid": "b6fee61b-12b4-4586-a3fb-2f94ffd0a40f", 00:13:12.977 "is_configured": true, 00:13:12.977 "data_offset": 0, 00:13:12.977 "data_size": 65536 00:13:12.978 }, 00:13:12.978 { 00:13:12.978 "name": "BaseBdev4", 00:13:12.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.978 "is_configured": false, 
00:13:12.978 "data_offset": 0, 00:13:12.978 "data_size": 0 00:13:12.978 } 00:13:12.978 ] 00:13:12.978 }' 00:13:12.978 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.978 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.572 [2024-10-11 09:46:57.968652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:13.572 [2024-10-11 09:46:57.968713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:13.572 [2024-10-11 09:46:57.968722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:13.572 [2024-10-11 09:46:57.969060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:13.572 [2024-10-11 09:46:57.969266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:13.572 [2024-10-11 09:46:57.969282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:13.572 [2024-10-11 09:46:57.969566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.572 BaseBdev4 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.572 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.572 [ 00:13:13.572 { 00:13:13.572 "name": "BaseBdev4", 00:13:13.572 "aliases": [ 00:13:13.572 "dbbc6f78-5161-4b15-bfb2-6a3a8f99ea86" 00:13:13.572 ], 00:13:13.572 "product_name": "Malloc disk", 00:13:13.572 "block_size": 512, 00:13:13.572 "num_blocks": 65536, 00:13:13.572 "uuid": "dbbc6f78-5161-4b15-bfb2-6a3a8f99ea86", 00:13:13.572 "assigned_rate_limits": { 00:13:13.572 "rw_ios_per_sec": 0, 00:13:13.572 "rw_mbytes_per_sec": 0, 00:13:13.572 "r_mbytes_per_sec": 0, 00:13:13.572 "w_mbytes_per_sec": 0 00:13:13.572 }, 00:13:13.572 "claimed": true, 00:13:13.572 "claim_type": "exclusive_write", 00:13:13.572 "zoned": false, 00:13:13.572 "supported_io_types": { 00:13:13.572 "read": true, 00:13:13.572 "write": true, 00:13:13.572 "unmap": true, 00:13:13.572 "flush": true, 00:13:13.572 "reset": true, 00:13:13.572 
"nvme_admin": false, 00:13:13.572 "nvme_io": false, 00:13:13.572 "nvme_io_md": false, 00:13:13.572 "write_zeroes": true, 00:13:13.572 "zcopy": true, 00:13:13.572 "get_zone_info": false, 00:13:13.572 "zone_management": false, 00:13:13.572 "zone_append": false, 00:13:13.572 "compare": false, 00:13:13.572 "compare_and_write": false, 00:13:13.572 "abort": true, 00:13:13.572 "seek_hole": false, 00:13:13.572 "seek_data": false, 00:13:13.572 "copy": true, 00:13:13.572 "nvme_iov_md": false 00:13:13.572 }, 00:13:13.572 "memory_domains": [ 00:13:13.572 { 00:13:13.572 "dma_device_id": "system", 00:13:13.572 "dma_device_type": 1 00:13:13.572 }, 00:13:13.572 { 00:13:13.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.572 "dma_device_type": 2 00:13:13.572 } 00:13:13.572 ], 00:13:13.572 "driver_specific": {} 00:13:13.572 } 00:13:13.572 ] 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.572 09:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.572 "name": "Existed_Raid", 00:13:13.572 "uuid": "954e43dc-8f80-4b1d-8c10-84078615e12e", 00:13:13.572 "strip_size_kb": 0, 00:13:13.572 "state": "online", 00:13:13.572 "raid_level": "raid1", 00:13:13.572 "superblock": false, 00:13:13.572 "num_base_bdevs": 4, 00:13:13.572 "num_base_bdevs_discovered": 4, 00:13:13.572 "num_base_bdevs_operational": 4, 00:13:13.572 "base_bdevs_list": [ 00:13:13.572 { 00:13:13.572 "name": "BaseBdev1", 00:13:13.572 "uuid": "9e2fd6d7-2867-4e01-a210-509d8d4d14fe", 00:13:13.572 "is_configured": true, 00:13:13.572 "data_offset": 0, 00:13:13.572 "data_size": 65536 00:13:13.572 }, 00:13:13.572 { 00:13:13.572 "name": "BaseBdev2", 00:13:13.572 "uuid": "8936c893-13b1-466e-84c9-6565f9f3ea23", 00:13:13.572 "is_configured": true, 00:13:13.572 "data_offset": 0, 00:13:13.572 "data_size": 65536 00:13:13.572 }, 00:13:13.572 { 00:13:13.572 "name": "BaseBdev3", 00:13:13.572 "uuid": 
"b6fee61b-12b4-4586-a3fb-2f94ffd0a40f", 00:13:13.572 "is_configured": true, 00:13:13.572 "data_offset": 0, 00:13:13.572 "data_size": 65536 00:13:13.572 }, 00:13:13.572 { 00:13:13.572 "name": "BaseBdev4", 00:13:13.572 "uuid": "dbbc6f78-5161-4b15-bfb2-6a3a8f99ea86", 00:13:13.572 "is_configured": true, 00:13:13.572 "data_offset": 0, 00:13:13.572 "data_size": 65536 00:13:13.572 } 00:13:13.572 ] 00:13:13.572 }' 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.572 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.831 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:13.831 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:13.831 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.831 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.831 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.831 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.091 [2024-10-11 09:46:58.468296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.091 09:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.091 "name": "Existed_Raid", 00:13:14.091 "aliases": [ 00:13:14.091 "954e43dc-8f80-4b1d-8c10-84078615e12e" 00:13:14.091 ], 00:13:14.091 "product_name": "Raid Volume", 00:13:14.091 "block_size": 512, 00:13:14.091 "num_blocks": 65536, 00:13:14.091 "uuid": "954e43dc-8f80-4b1d-8c10-84078615e12e", 00:13:14.091 "assigned_rate_limits": { 00:13:14.091 "rw_ios_per_sec": 0, 00:13:14.091 "rw_mbytes_per_sec": 0, 00:13:14.091 "r_mbytes_per_sec": 0, 00:13:14.091 "w_mbytes_per_sec": 0 00:13:14.091 }, 00:13:14.091 "claimed": false, 00:13:14.091 "zoned": false, 00:13:14.091 "supported_io_types": { 00:13:14.091 "read": true, 00:13:14.091 "write": true, 00:13:14.091 "unmap": false, 00:13:14.091 "flush": false, 00:13:14.091 "reset": true, 00:13:14.091 "nvme_admin": false, 00:13:14.091 "nvme_io": false, 00:13:14.091 "nvme_io_md": false, 00:13:14.091 "write_zeroes": true, 00:13:14.091 "zcopy": false, 00:13:14.091 "get_zone_info": false, 00:13:14.091 "zone_management": false, 00:13:14.091 "zone_append": false, 00:13:14.091 "compare": false, 00:13:14.091 "compare_and_write": false, 00:13:14.091 "abort": false, 00:13:14.091 "seek_hole": false, 00:13:14.091 "seek_data": false, 00:13:14.091 "copy": false, 00:13:14.091 "nvme_iov_md": false 00:13:14.091 }, 00:13:14.091 "memory_domains": [ 00:13:14.091 { 00:13:14.091 "dma_device_id": "system", 00:13:14.091 "dma_device_type": 1 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.091 "dma_device_type": 2 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "dma_device_id": "system", 00:13:14.091 "dma_device_type": 1 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.091 "dma_device_type": 2 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "dma_device_id": "system", 00:13:14.091 "dma_device_type": 1 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:14.091 "dma_device_type": 2 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "dma_device_id": "system", 00:13:14.091 "dma_device_type": 1 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.091 "dma_device_type": 2 00:13:14.091 } 00:13:14.091 ], 00:13:14.091 "driver_specific": { 00:13:14.091 "raid": { 00:13:14.091 "uuid": "954e43dc-8f80-4b1d-8c10-84078615e12e", 00:13:14.091 "strip_size_kb": 0, 00:13:14.091 "state": "online", 00:13:14.091 "raid_level": "raid1", 00:13:14.091 "superblock": false, 00:13:14.091 "num_base_bdevs": 4, 00:13:14.091 "num_base_bdevs_discovered": 4, 00:13:14.091 "num_base_bdevs_operational": 4, 00:13:14.091 "base_bdevs_list": [ 00:13:14.091 { 00:13:14.091 "name": "BaseBdev1", 00:13:14.091 "uuid": "9e2fd6d7-2867-4e01-a210-509d8d4d14fe", 00:13:14.091 "is_configured": true, 00:13:14.091 "data_offset": 0, 00:13:14.091 "data_size": 65536 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "name": "BaseBdev2", 00:13:14.091 "uuid": "8936c893-13b1-466e-84c9-6565f9f3ea23", 00:13:14.091 "is_configured": true, 00:13:14.091 "data_offset": 0, 00:13:14.091 "data_size": 65536 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "name": "BaseBdev3", 00:13:14.091 "uuid": "b6fee61b-12b4-4586-a3fb-2f94ffd0a40f", 00:13:14.091 "is_configured": true, 00:13:14.091 "data_offset": 0, 00:13:14.091 "data_size": 65536 00:13:14.091 }, 00:13:14.091 { 00:13:14.091 "name": "BaseBdev4", 00:13:14.091 "uuid": "dbbc6f78-5161-4b15-bfb2-6a3a8f99ea86", 00:13:14.091 "is_configured": true, 00:13:14.091 "data_offset": 0, 00:13:14.091 "data_size": 65536 00:13:14.091 } 00:13:14.091 ] 00:13:14.091 } 00:13:14.091 } 00:13:14.091 }' 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:14.091 BaseBdev2 00:13:14.091 BaseBdev3 
00:13:14.091 BaseBdev4' 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.091 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.092 09:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.092 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.352 09:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.352 [2024-10-11 09:46:58.787498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.352 
09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.352 "name": "Existed_Raid", 00:13:14.352 "uuid": "954e43dc-8f80-4b1d-8c10-84078615e12e", 00:13:14.352 "strip_size_kb": 0, 00:13:14.352 "state": "online", 00:13:14.352 "raid_level": "raid1", 00:13:14.352 "superblock": false, 00:13:14.352 "num_base_bdevs": 4, 00:13:14.352 "num_base_bdevs_discovered": 3, 00:13:14.352 "num_base_bdevs_operational": 3, 00:13:14.352 "base_bdevs_list": [ 00:13:14.352 { 00:13:14.352 "name": null, 00:13:14.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.352 "is_configured": false, 00:13:14.352 "data_offset": 0, 00:13:14.352 "data_size": 65536 00:13:14.352 }, 00:13:14.352 { 00:13:14.352 "name": "BaseBdev2", 00:13:14.352 "uuid": "8936c893-13b1-466e-84c9-6565f9f3ea23", 00:13:14.352 "is_configured": true, 00:13:14.352 "data_offset": 0, 00:13:14.352 "data_size": 65536 00:13:14.352 }, 00:13:14.352 { 00:13:14.352 "name": "BaseBdev3", 00:13:14.352 "uuid": "b6fee61b-12b4-4586-a3fb-2f94ffd0a40f", 00:13:14.352 "is_configured": true, 00:13:14.352 "data_offset": 0, 
00:13:14.352 "data_size": 65536 00:13:14.352 }, 00:13:14.352 { 00:13:14.352 "name": "BaseBdev4", 00:13:14.352 "uuid": "dbbc6f78-5161-4b15-bfb2-6a3a8f99ea86", 00:13:14.352 "is_configured": true, 00:13:14.352 "data_offset": 0, 00:13:14.352 "data_size": 65536 00:13:14.352 } 00:13:14.352 ] 00:13:14.352 }' 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.352 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.921 [2024-10-11 09:46:59.401015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:14.921 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.181 [2024-10-11 09:46:59.566310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.181 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.181 [2024-10-11 09:46:59.728549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:15.181 [2024-10-11 09:46:59.728729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.508 [2024-10-11 09:46:59.828698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.508 [2024-10-11 09:46:59.828868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.508 [2024-10-11 09:46:59.828956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 BaseBdev2 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.508 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 [ 00:13:15.508 { 00:13:15.508 "name": "BaseBdev2", 00:13:15.508 "aliases": [ 00:13:15.508 "841d64e0-83c5-4364-9363-8d1b2c14cd31" 00:13:15.508 ], 00:13:15.508 "product_name": "Malloc disk", 00:13:15.508 "block_size": 512, 00:13:15.508 "num_blocks": 65536, 00:13:15.508 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:15.508 "assigned_rate_limits": { 00:13:15.508 "rw_ios_per_sec": 0, 00:13:15.508 "rw_mbytes_per_sec": 0, 00:13:15.508 "r_mbytes_per_sec": 0, 00:13:15.508 "w_mbytes_per_sec": 0 00:13:15.508 }, 00:13:15.508 "claimed": false, 00:13:15.508 "zoned": false, 00:13:15.508 "supported_io_types": { 00:13:15.508 "read": true, 00:13:15.508 "write": true, 00:13:15.508 "unmap": true, 00:13:15.508 "flush": true, 00:13:15.508 "reset": true, 00:13:15.508 "nvme_admin": false, 00:13:15.508 "nvme_io": false, 00:13:15.508 "nvme_io_md": false, 00:13:15.508 "write_zeroes": true, 00:13:15.508 "zcopy": true, 00:13:15.508 "get_zone_info": false, 00:13:15.508 "zone_management": false, 00:13:15.508 "zone_append": false, 00:13:15.508 "compare": false, 
00:13:15.508 "compare_and_write": false, 00:13:15.508 "abort": true, 00:13:15.508 "seek_hole": false, 00:13:15.508 "seek_data": false, 00:13:15.508 "copy": true, 00:13:15.508 "nvme_iov_md": false 00:13:15.508 }, 00:13:15.508 "memory_domains": [ 00:13:15.508 { 00:13:15.508 "dma_device_id": "system", 00:13:15.509 "dma_device_type": 1 00:13:15.509 }, 00:13:15.509 { 00:13:15.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.509 "dma_device_type": 2 00:13:15.509 } 00:13:15.509 ], 00:13:15.509 "driver_specific": {} 00:13:15.509 } 00:13:15.509 ] 00:13:15.509 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.509 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:15.509 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:15.509 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.509 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:15.509 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.509 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 BaseBdev3 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 [ 00:13:15.509 { 00:13:15.509 "name": "BaseBdev3", 00:13:15.509 "aliases": [ 00:13:15.509 "5830926f-f8de-4e3d-9c69-355ff947aee5" 00:13:15.509 ], 00:13:15.509 "product_name": "Malloc disk", 00:13:15.509 "block_size": 512, 00:13:15.509 "num_blocks": 65536, 00:13:15.509 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:15.509 "assigned_rate_limits": { 00:13:15.509 "rw_ios_per_sec": 0, 00:13:15.509 "rw_mbytes_per_sec": 0, 00:13:15.509 "r_mbytes_per_sec": 0, 00:13:15.509 "w_mbytes_per_sec": 0 00:13:15.509 }, 00:13:15.509 "claimed": false, 00:13:15.509 "zoned": false, 00:13:15.509 "supported_io_types": { 00:13:15.509 "read": true, 00:13:15.509 "write": true, 00:13:15.509 "unmap": true, 00:13:15.509 "flush": true, 00:13:15.509 "reset": true, 00:13:15.509 "nvme_admin": false, 00:13:15.509 "nvme_io": false, 00:13:15.509 "nvme_io_md": false, 00:13:15.509 "write_zeroes": true, 00:13:15.509 "zcopy": true, 00:13:15.509 "get_zone_info": false, 00:13:15.509 "zone_management": false, 00:13:15.509 "zone_append": false, 00:13:15.509 "compare": false, 00:13:15.509 
"compare_and_write": false, 00:13:15.509 "abort": true, 00:13:15.509 "seek_hole": false, 00:13:15.509 "seek_data": false, 00:13:15.509 "copy": true, 00:13:15.509 "nvme_iov_md": false 00:13:15.509 }, 00:13:15.509 "memory_domains": [ 00:13:15.509 { 00:13:15.509 "dma_device_id": "system", 00:13:15.509 "dma_device_type": 1 00:13:15.509 }, 00:13:15.509 { 00:13:15.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.509 "dma_device_type": 2 00:13:15.509 } 00:13:15.509 ], 00:13:15.509 "driver_specific": {} 00:13:15.509 } 00:13:15.509 ] 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 BaseBdev4 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.509 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 [ 00:13:15.509 { 00:13:15.509 "name": "BaseBdev4", 00:13:15.509 "aliases": [ 00:13:15.509 "21f38fb4-9f82-4814-8dde-eb1b0a3d787c" 00:13:15.509 ], 00:13:15.509 "product_name": "Malloc disk", 00:13:15.509 "block_size": 512, 00:13:15.509 "num_blocks": 65536, 00:13:15.509 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:15.509 "assigned_rate_limits": { 00:13:15.509 "rw_ios_per_sec": 0, 00:13:15.509 "rw_mbytes_per_sec": 0, 00:13:15.509 "r_mbytes_per_sec": 0, 00:13:15.509 "w_mbytes_per_sec": 0 00:13:15.509 }, 00:13:15.509 "claimed": false, 00:13:15.509 "zoned": false, 00:13:15.509 "supported_io_types": { 00:13:15.509 "read": true, 00:13:15.509 "write": true, 00:13:15.509 "unmap": true, 00:13:15.509 "flush": true, 00:13:15.509 "reset": true, 00:13:15.509 "nvme_admin": false, 00:13:15.509 "nvme_io": false, 00:13:15.509 "nvme_io_md": false, 00:13:15.509 "write_zeroes": true, 00:13:15.510 "zcopy": true, 00:13:15.510 "get_zone_info": false, 00:13:15.510 "zone_management": false, 00:13:15.510 "zone_append": false, 00:13:15.510 "compare": false, 00:13:15.510 
"compare_and_write": false, 00:13:15.510 "abort": true, 00:13:15.510 "seek_hole": false, 00:13:15.510 "seek_data": false, 00:13:15.510 "copy": true, 00:13:15.510 "nvme_iov_md": false 00:13:15.510 }, 00:13:15.510 "memory_domains": [ 00:13:15.510 { 00:13:15.510 "dma_device_id": "system", 00:13:15.510 "dma_device_type": 1 00:13:15.510 }, 00:13:15.510 { 00:13:15.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.510 "dma_device_type": 2 00:13:15.510 } 00:13:15.769 ], 00:13:15.769 "driver_specific": {} 00:13:15.769 } 00:13:15.769 ] 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.769 [2024-10-11 09:47:00.144074] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.769 [2024-10-11 09:47:00.144129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.769 [2024-10-11 09:47:00.144156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.769 [2024-10-11 09:47:00.146244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.769 [2024-10-11 09:47:00.146295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.769 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.770 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.770 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.770 "name": "Existed_Raid", 00:13:15.770 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:15.770 "strip_size_kb": 0, 00:13:15.770 "state": "configuring", 00:13:15.770 "raid_level": "raid1", 00:13:15.770 "superblock": false, 00:13:15.770 "num_base_bdevs": 4, 00:13:15.770 "num_base_bdevs_discovered": 3, 00:13:15.770 "num_base_bdevs_operational": 4, 00:13:15.770 "base_bdevs_list": [ 00:13:15.770 { 00:13:15.770 "name": "BaseBdev1", 00:13:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.770 "is_configured": false, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 0 00:13:15.770 }, 00:13:15.770 { 00:13:15.770 "name": "BaseBdev2", 00:13:15.770 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:15.770 "is_configured": true, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 65536 00:13:15.770 }, 00:13:15.770 { 00:13:15.770 "name": "BaseBdev3", 00:13:15.770 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:15.770 "is_configured": true, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 65536 00:13:15.770 }, 00:13:15.770 { 00:13:15.770 "name": "BaseBdev4", 00:13:15.770 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:15.770 "is_configured": true, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 65536 00:13:15.770 } 00:13:15.770 ] 00:13:15.770 }' 00:13:15.770 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.770 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.029 [2024-10-11 09:47:00.627381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.029 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.288 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.288 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.288 "name": "Existed_Raid", 00:13:16.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.288 
"strip_size_kb": 0, 00:13:16.288 "state": "configuring", 00:13:16.288 "raid_level": "raid1", 00:13:16.288 "superblock": false, 00:13:16.288 "num_base_bdevs": 4, 00:13:16.288 "num_base_bdevs_discovered": 2, 00:13:16.288 "num_base_bdevs_operational": 4, 00:13:16.288 "base_bdevs_list": [ 00:13:16.288 { 00:13:16.288 "name": "BaseBdev1", 00:13:16.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.288 "is_configured": false, 00:13:16.288 "data_offset": 0, 00:13:16.288 "data_size": 0 00:13:16.288 }, 00:13:16.288 { 00:13:16.288 "name": null, 00:13:16.288 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:16.288 "is_configured": false, 00:13:16.288 "data_offset": 0, 00:13:16.288 "data_size": 65536 00:13:16.288 }, 00:13:16.288 { 00:13:16.288 "name": "BaseBdev3", 00:13:16.288 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:16.288 "is_configured": true, 00:13:16.288 "data_offset": 0, 00:13:16.288 "data_size": 65536 00:13:16.288 }, 00:13:16.288 { 00:13:16.288 "name": "BaseBdev4", 00:13:16.288 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:16.288 "is_configured": true, 00:13:16.288 "data_offset": 0, 00:13:16.288 "data_size": 65536 00:13:16.288 } 00:13:16.288 ] 00:13:16.288 }' 00:13:16.288 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.288 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.546 09:47:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.546 [2024-10-11 09:47:01.154050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.546 BaseBdev1 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.546 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.547 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.547 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:16.547 09:47:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.547 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.806 [ 00:13:16.806 { 00:13:16.806 "name": "BaseBdev1", 00:13:16.806 "aliases": [ 00:13:16.806 "f3edf483-3d89-4391-bc42-dceae48073aa" 00:13:16.806 ], 00:13:16.806 "product_name": "Malloc disk", 00:13:16.806 "block_size": 512, 00:13:16.806 "num_blocks": 65536, 00:13:16.806 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:16.806 "assigned_rate_limits": { 00:13:16.806 "rw_ios_per_sec": 0, 00:13:16.806 "rw_mbytes_per_sec": 0, 00:13:16.806 "r_mbytes_per_sec": 0, 00:13:16.806 "w_mbytes_per_sec": 0 00:13:16.806 }, 00:13:16.806 "claimed": true, 00:13:16.806 "claim_type": "exclusive_write", 00:13:16.806 "zoned": false, 00:13:16.806 "supported_io_types": { 00:13:16.806 "read": true, 00:13:16.806 "write": true, 00:13:16.806 "unmap": true, 00:13:16.806 "flush": true, 00:13:16.806 "reset": true, 00:13:16.806 "nvme_admin": false, 00:13:16.806 "nvme_io": false, 00:13:16.806 "nvme_io_md": false, 00:13:16.806 "write_zeroes": true, 00:13:16.806 "zcopy": true, 00:13:16.806 "get_zone_info": false, 00:13:16.806 "zone_management": false, 00:13:16.806 "zone_append": false, 00:13:16.806 "compare": false, 00:13:16.806 "compare_and_write": false, 00:13:16.806 "abort": true, 00:13:16.806 "seek_hole": false, 00:13:16.806 "seek_data": false, 00:13:16.806 "copy": true, 00:13:16.806 "nvme_iov_md": false 00:13:16.806 }, 00:13:16.806 "memory_domains": [ 00:13:16.806 { 00:13:16.806 "dma_device_id": "system", 00:13:16.806 "dma_device_type": 1 00:13:16.806 }, 00:13:16.806 { 00:13:16.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.806 "dma_device_type": 2 00:13:16.806 } 00:13:16.806 ], 00:13:16.806 "driver_specific": {} 00:13:16.806 } 00:13:16.806 ] 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.806 "name": "Existed_Raid", 00:13:16.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.806 
"strip_size_kb": 0, 00:13:16.806 "state": "configuring", 00:13:16.806 "raid_level": "raid1", 00:13:16.806 "superblock": false, 00:13:16.806 "num_base_bdevs": 4, 00:13:16.806 "num_base_bdevs_discovered": 3, 00:13:16.806 "num_base_bdevs_operational": 4, 00:13:16.806 "base_bdevs_list": [ 00:13:16.806 { 00:13:16.806 "name": "BaseBdev1", 00:13:16.806 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:16.806 "is_configured": true, 00:13:16.806 "data_offset": 0, 00:13:16.806 "data_size": 65536 00:13:16.806 }, 00:13:16.806 { 00:13:16.806 "name": null, 00:13:16.806 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:16.806 "is_configured": false, 00:13:16.806 "data_offset": 0, 00:13:16.806 "data_size": 65536 00:13:16.806 }, 00:13:16.806 { 00:13:16.806 "name": "BaseBdev3", 00:13:16.806 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:16.806 "is_configured": true, 00:13:16.806 "data_offset": 0, 00:13:16.806 "data_size": 65536 00:13:16.806 }, 00:13:16.806 { 00:13:16.806 "name": "BaseBdev4", 00:13:16.806 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:16.806 "is_configured": true, 00:13:16.806 "data_offset": 0, 00:13:16.806 "data_size": 65536 00:13:16.806 } 00:13:16.806 ] 00:13:16.806 }' 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.806 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.065 
09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.065 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.065 [2024-10-11 09:47:01.689279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.325 09:47:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.325 "name": "Existed_Raid", 00:13:17.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.325 "strip_size_kb": 0, 00:13:17.325 "state": "configuring", 00:13:17.325 "raid_level": "raid1", 00:13:17.325 "superblock": false, 00:13:17.325 "num_base_bdevs": 4, 00:13:17.325 "num_base_bdevs_discovered": 2, 00:13:17.325 "num_base_bdevs_operational": 4, 00:13:17.325 "base_bdevs_list": [ 00:13:17.325 { 00:13:17.325 "name": "BaseBdev1", 00:13:17.325 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:17.325 "is_configured": true, 00:13:17.325 "data_offset": 0, 00:13:17.325 "data_size": 65536 00:13:17.325 }, 00:13:17.325 { 00:13:17.325 "name": null, 00:13:17.325 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:17.325 "is_configured": false, 00:13:17.325 "data_offset": 0, 00:13:17.325 "data_size": 65536 00:13:17.325 }, 00:13:17.325 { 00:13:17.325 "name": null, 00:13:17.325 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:17.325 "is_configured": false, 00:13:17.325 "data_offset": 0, 00:13:17.325 "data_size": 65536 00:13:17.325 }, 00:13:17.325 { 00:13:17.325 "name": "BaseBdev4", 00:13:17.325 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:17.325 "is_configured": true, 00:13:17.325 "data_offset": 0, 00:13:17.325 "data_size": 65536 00:13:17.325 } 00:13:17.325 ] 00:13:17.325 }' 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.325 09:47:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.585 [2024-10-11 09:47:02.192436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.585 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.845 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.845 "name": "Existed_Raid", 00:13:17.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.845 "strip_size_kb": 0, 00:13:17.845 "state": "configuring", 00:13:17.845 "raid_level": "raid1", 00:13:17.845 "superblock": false, 00:13:17.845 "num_base_bdevs": 4, 00:13:17.845 "num_base_bdevs_discovered": 3, 00:13:17.845 "num_base_bdevs_operational": 4, 00:13:17.845 "base_bdevs_list": [ 00:13:17.845 { 00:13:17.845 "name": "BaseBdev1", 00:13:17.845 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:17.845 "is_configured": true, 00:13:17.845 "data_offset": 0, 00:13:17.845 "data_size": 65536 00:13:17.845 }, 00:13:17.845 { 00:13:17.845 "name": null, 00:13:17.845 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:17.845 "is_configured": false, 00:13:17.845 "data_offset": 0, 00:13:17.845 "data_size": 65536 00:13:17.845 }, 00:13:17.845 { 
00:13:17.845 "name": "BaseBdev3", 00:13:17.845 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:17.845 "is_configured": true, 00:13:17.845 "data_offset": 0, 00:13:17.845 "data_size": 65536 00:13:17.845 }, 00:13:17.845 { 00:13:17.845 "name": "BaseBdev4", 00:13:17.845 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:17.845 "is_configured": true, 00:13:17.845 "data_offset": 0, 00:13:17.845 "data_size": 65536 00:13:17.845 } 00:13:17.845 ] 00:13:17.845 }' 00:13:17.845 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.845 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.104 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.364 [2024-10-11 09:47:02.739616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.364 "name": "Existed_Raid", 00:13:18.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.364 "strip_size_kb": 0, 00:13:18.364 "state": "configuring", 00:13:18.364 "raid_level": "raid1", 00:13:18.364 "superblock": false, 00:13:18.364 
"num_base_bdevs": 4, 00:13:18.364 "num_base_bdevs_discovered": 2, 00:13:18.364 "num_base_bdevs_operational": 4, 00:13:18.364 "base_bdevs_list": [ 00:13:18.364 { 00:13:18.364 "name": null, 00:13:18.364 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:18.364 "is_configured": false, 00:13:18.364 "data_offset": 0, 00:13:18.364 "data_size": 65536 00:13:18.364 }, 00:13:18.364 { 00:13:18.364 "name": null, 00:13:18.364 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:18.364 "is_configured": false, 00:13:18.364 "data_offset": 0, 00:13:18.364 "data_size": 65536 00:13:18.364 }, 00:13:18.364 { 00:13:18.364 "name": "BaseBdev3", 00:13:18.364 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:18.364 "is_configured": true, 00:13:18.364 "data_offset": 0, 00:13:18.364 "data_size": 65536 00:13:18.364 }, 00:13:18.364 { 00:13:18.364 "name": "BaseBdev4", 00:13:18.364 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:18.364 "is_configured": true, 00:13:18.364 "data_offset": 0, 00:13:18.364 "data_size": 65536 00:13:18.364 } 00:13:18.364 ] 00:13:18.364 }' 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.364 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:18.932 09:47:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.932 [2024-10-11 09:47:03.327766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.932 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.933 09:47:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.933 "name": "Existed_Raid", 00:13:18.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.933 "strip_size_kb": 0, 00:13:18.933 "state": "configuring", 00:13:18.933 "raid_level": "raid1", 00:13:18.933 "superblock": false, 00:13:18.933 "num_base_bdevs": 4, 00:13:18.933 "num_base_bdevs_discovered": 3, 00:13:18.933 "num_base_bdevs_operational": 4, 00:13:18.933 "base_bdevs_list": [ 00:13:18.933 { 00:13:18.933 "name": null, 00:13:18.933 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:18.933 "is_configured": false, 00:13:18.933 "data_offset": 0, 00:13:18.933 "data_size": 65536 00:13:18.933 }, 00:13:18.933 { 00:13:18.933 "name": "BaseBdev2", 00:13:18.933 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:18.933 "is_configured": true, 00:13:18.933 "data_offset": 0, 00:13:18.933 "data_size": 65536 00:13:18.933 }, 00:13:18.933 { 00:13:18.933 "name": "BaseBdev3", 00:13:18.933 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:18.933 "is_configured": true, 00:13:18.933 "data_offset": 0, 00:13:18.933 "data_size": 65536 00:13:18.933 }, 00:13:18.933 { 00:13:18.933 "name": "BaseBdev4", 00:13:18.933 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:18.933 "is_configured": true, 00:13:18.933 "data_offset": 0, 00:13:18.933 "data_size": 65536 00:13:18.933 } 00:13:18.933 ] 00:13:18.933 }' 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.933 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.191 09:47:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.191 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.191 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.191 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.191 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.191 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f3edf483-3d89-4391-bc42-dceae48073aa 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.450 [2024-10-11 09:47:03.894727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:19.450 [2024-10-11 09:47:03.894797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:19.450 [2024-10-11 09:47:03.894808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:19.450 [2024-10-11 09:47:03.895101] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:19.450 [2024-10-11 09:47:03.895285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:19.450 [2024-10-11 09:47:03.895296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:19.450 [2024-10-11 09:47:03.895599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.450 NewBaseBdev 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:19.450 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.450 09:47:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.450 [ 00:13:19.450 { 00:13:19.450 "name": "NewBaseBdev", 00:13:19.450 "aliases": [ 00:13:19.450 "f3edf483-3d89-4391-bc42-dceae48073aa" 00:13:19.450 ], 00:13:19.450 "product_name": "Malloc disk", 00:13:19.450 "block_size": 512, 00:13:19.450 "num_blocks": 65536, 00:13:19.450 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:19.450 "assigned_rate_limits": { 00:13:19.450 "rw_ios_per_sec": 0, 00:13:19.450 "rw_mbytes_per_sec": 0, 00:13:19.450 "r_mbytes_per_sec": 0, 00:13:19.450 "w_mbytes_per_sec": 0 00:13:19.450 }, 00:13:19.450 "claimed": true, 00:13:19.450 "claim_type": "exclusive_write", 00:13:19.450 "zoned": false, 00:13:19.450 "supported_io_types": { 00:13:19.450 "read": true, 00:13:19.450 "write": true, 00:13:19.450 "unmap": true, 00:13:19.450 "flush": true, 00:13:19.450 "reset": true, 00:13:19.450 "nvme_admin": false, 00:13:19.450 "nvme_io": false, 00:13:19.450 "nvme_io_md": false, 00:13:19.450 "write_zeroes": true, 00:13:19.450 "zcopy": true, 00:13:19.451 "get_zone_info": false, 00:13:19.451 "zone_management": false, 00:13:19.451 "zone_append": false, 00:13:19.451 "compare": false, 00:13:19.451 "compare_and_write": false, 00:13:19.451 "abort": true, 00:13:19.451 "seek_hole": false, 00:13:19.451 "seek_data": false, 00:13:19.451 "copy": true, 00:13:19.451 "nvme_iov_md": false 00:13:19.451 }, 00:13:19.451 "memory_domains": [ 00:13:19.451 { 00:13:19.451 "dma_device_id": "system", 00:13:19.451 "dma_device_type": 1 00:13:19.451 }, 00:13:19.451 { 00:13:19.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.451 "dma_device_type": 2 00:13:19.451 } 00:13:19.451 ], 00:13:19.451 "driver_specific": {} 00:13:19.451 } 00:13:19.451 ] 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:19.451 09:47:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.451 "name": "Existed_Raid", 00:13:19.451 "uuid": "2973d31b-ab60-466b-aa78-5a21b497fa12", 00:13:19.451 "strip_size_kb": 0, 00:13:19.451 "state": "online", 00:13:19.451 "raid_level": "raid1", 
00:13:19.451 "superblock": false, 00:13:19.451 "num_base_bdevs": 4, 00:13:19.451 "num_base_bdevs_discovered": 4, 00:13:19.451 "num_base_bdevs_operational": 4, 00:13:19.451 "base_bdevs_list": [ 00:13:19.451 { 00:13:19.451 "name": "NewBaseBdev", 00:13:19.451 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:19.451 "is_configured": true, 00:13:19.451 "data_offset": 0, 00:13:19.451 "data_size": 65536 00:13:19.451 }, 00:13:19.451 { 00:13:19.451 "name": "BaseBdev2", 00:13:19.451 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:19.451 "is_configured": true, 00:13:19.451 "data_offset": 0, 00:13:19.451 "data_size": 65536 00:13:19.451 }, 00:13:19.451 { 00:13:19.451 "name": "BaseBdev3", 00:13:19.451 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:19.451 "is_configured": true, 00:13:19.451 "data_offset": 0, 00:13:19.451 "data_size": 65536 00:13:19.451 }, 00:13:19.451 { 00:13:19.451 "name": "BaseBdev4", 00:13:19.451 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:19.451 "is_configured": true, 00:13:19.451 "data_offset": 0, 00:13:19.451 "data_size": 65536 00:13:19.451 } 00:13:19.451 ] 00:13:19.451 }' 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.451 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:20.019 [2024-10-11 09:47:04.418257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:20.019 "name": "Existed_Raid", 00:13:20.019 "aliases": [ 00:13:20.019 "2973d31b-ab60-466b-aa78-5a21b497fa12" 00:13:20.019 ], 00:13:20.019 "product_name": "Raid Volume", 00:13:20.019 "block_size": 512, 00:13:20.019 "num_blocks": 65536, 00:13:20.019 "uuid": "2973d31b-ab60-466b-aa78-5a21b497fa12", 00:13:20.019 "assigned_rate_limits": { 00:13:20.019 "rw_ios_per_sec": 0, 00:13:20.019 "rw_mbytes_per_sec": 0, 00:13:20.019 "r_mbytes_per_sec": 0, 00:13:20.019 "w_mbytes_per_sec": 0 00:13:20.019 }, 00:13:20.019 "claimed": false, 00:13:20.019 "zoned": false, 00:13:20.019 "supported_io_types": { 00:13:20.019 "read": true, 00:13:20.019 "write": true, 00:13:20.019 "unmap": false, 00:13:20.019 "flush": false, 00:13:20.019 "reset": true, 00:13:20.019 "nvme_admin": false, 00:13:20.019 "nvme_io": false, 00:13:20.019 "nvme_io_md": false, 00:13:20.019 "write_zeroes": true, 00:13:20.019 "zcopy": false, 00:13:20.019 "get_zone_info": false, 00:13:20.019 "zone_management": false, 00:13:20.019 "zone_append": false, 00:13:20.019 "compare": false, 00:13:20.019 "compare_and_write": false, 00:13:20.019 "abort": false, 00:13:20.019 "seek_hole": false, 00:13:20.019 "seek_data": false, 00:13:20.019 "copy": false, 00:13:20.019 
"nvme_iov_md": false 00:13:20.019 }, 00:13:20.019 "memory_domains": [ 00:13:20.019 { 00:13:20.019 "dma_device_id": "system", 00:13:20.019 "dma_device_type": 1 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.019 "dma_device_type": 2 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "dma_device_id": "system", 00:13:20.019 "dma_device_type": 1 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.019 "dma_device_type": 2 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "dma_device_id": "system", 00:13:20.019 "dma_device_type": 1 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.019 "dma_device_type": 2 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "dma_device_id": "system", 00:13:20.019 "dma_device_type": 1 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.019 "dma_device_type": 2 00:13:20.019 } 00:13:20.019 ], 00:13:20.019 "driver_specific": { 00:13:20.019 "raid": { 00:13:20.019 "uuid": "2973d31b-ab60-466b-aa78-5a21b497fa12", 00:13:20.019 "strip_size_kb": 0, 00:13:20.019 "state": "online", 00:13:20.019 "raid_level": "raid1", 00:13:20.019 "superblock": false, 00:13:20.019 "num_base_bdevs": 4, 00:13:20.019 "num_base_bdevs_discovered": 4, 00:13:20.019 "num_base_bdevs_operational": 4, 00:13:20.019 "base_bdevs_list": [ 00:13:20.019 { 00:13:20.019 "name": "NewBaseBdev", 00:13:20.019 "uuid": "f3edf483-3d89-4391-bc42-dceae48073aa", 00:13:20.019 "is_configured": true, 00:13:20.019 "data_offset": 0, 00:13:20.019 "data_size": 65536 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "name": "BaseBdev2", 00:13:20.019 "uuid": "841d64e0-83c5-4364-9363-8d1b2c14cd31", 00:13:20.019 "is_configured": true, 00:13:20.019 "data_offset": 0, 00:13:20.019 "data_size": 65536 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "name": "BaseBdev3", 00:13:20.019 "uuid": "5830926f-f8de-4e3d-9c69-355ff947aee5", 00:13:20.019 "is_configured": true, 
00:13:20.019 "data_offset": 0, 00:13:20.019 "data_size": 65536 00:13:20.019 }, 00:13:20.019 { 00:13:20.019 "name": "BaseBdev4", 00:13:20.019 "uuid": "21f38fb4-9f82-4814-8dde-eb1b0a3d787c", 00:13:20.019 "is_configured": true, 00:13:20.019 "data_offset": 0, 00:13:20.019 "data_size": 65536 00:13:20.019 } 00:13:20.019 ] 00:13:20.019 } 00:13:20.019 } 00:13:20.019 }' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:20.019 BaseBdev2 00:13:20.019 BaseBdev3 00:13:20.019 BaseBdev4' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.019 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.279 [2024-10-11 09:47:04.777306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.279 [2024-10-11 09:47:04.777432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.279 [2024-10-11 09:47:04.777573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.279 [2024-10-11 09:47:04.777934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.279 [2024-10-11 09:47:04.777999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73675 
00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73675 ']' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73675 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73675 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.279 killing process with pid 73675 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73675' 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73675 00:13:20.279 [2024-10-11 09:47:04.816410] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.279 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73675 00:13:20.848 [2024-10-11 09:47:05.218104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.228 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:22.228 00:13:22.228 real 0m12.153s 00:13:22.228 user 0m19.093s 00:13:22.229 sys 0m2.332s 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.229 ************************************ 00:13:22.229 END TEST raid_state_function_test 00:13:22.229 ************************************ 00:13:22.229 09:47:06 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:22.229 09:47:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:22.229 09:47:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.229 09:47:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.229 ************************************ 00:13:22.229 START TEST raid_state_function_test_sb 00:13:22.229 ************************************ 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.229 09:47:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74361 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74361' 00:13:22.229 Process raid pid: 74361 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74361 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74361 ']' 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.229 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.229 [2024-10-11 09:47:06.623680] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:13:22.229 [2024-10-11 09:47:06.623841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.229 [2024-10-11 09:47:06.794999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.488 [2024-10-11 09:47:06.926264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.748 [2024-10-11 09:47:07.152325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.748 [2024-10-11 09:47:07.152372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.009 [2024-10-11 09:47:07.524037] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.009 [2024-10-11 09:47:07.524211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.009 [2024-10-11 09:47:07.524234] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.009 [2024-10-11 09:47:07.524246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.009 [2024-10-11 09:47:07.524254] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:23.009 [2024-10-11 09:47:07.524265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.009 [2024-10-11 09:47:07.524272] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:23.009 [2024-10-11 09:47:07.524282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.009 09:47:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.009 "name": "Existed_Raid", 00:13:23.009 "uuid": "95aeeca0-a0cd-4009-9281-eb17f76af9a2", 00:13:23.009 "strip_size_kb": 0, 00:13:23.009 "state": "configuring", 00:13:23.009 "raid_level": "raid1", 00:13:23.009 "superblock": true, 00:13:23.009 "num_base_bdevs": 4, 00:13:23.009 "num_base_bdevs_discovered": 0, 00:13:23.009 "num_base_bdevs_operational": 4, 00:13:23.009 "base_bdevs_list": [ 00:13:23.009 { 00:13:23.009 "name": "BaseBdev1", 00:13:23.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.009 "is_configured": false, 00:13:23.009 "data_offset": 0, 00:13:23.009 "data_size": 0 00:13:23.009 }, 00:13:23.009 { 00:13:23.009 "name": "BaseBdev2", 00:13:23.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.009 "is_configured": false, 00:13:23.009 "data_offset": 0, 00:13:23.009 "data_size": 0 00:13:23.009 }, 00:13:23.009 { 00:13:23.009 "name": "BaseBdev3", 00:13:23.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.009 "is_configured": false, 00:13:23.009 "data_offset": 0, 00:13:23.009 "data_size": 0 00:13:23.009 }, 00:13:23.009 { 00:13:23.009 "name": "BaseBdev4", 00:13:23.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.009 "is_configured": false, 00:13:23.009 "data_offset": 0, 00:13:23.009 "data_size": 0 00:13:23.009 } 00:13:23.009 ] 00:13:23.009 }' 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.009 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 09:47:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.579 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.579 09:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 [2024-10-11 09:47:08.003209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.579 [2024-10-11 09:47:08.003293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 [2024-10-11 09:47:08.015189] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.579 [2024-10-11 09:47:08.015240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.579 [2024-10-11 09:47:08.015248] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.579 [2024-10-11 09:47:08.015257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.579 [2024-10-11 09:47:08.015264] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.579 [2024-10-11 09:47:08.015272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.579 [2024-10-11 09:47:08.015278] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:23.579 [2024-10-11 09:47:08.015286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 [2024-10-11 09:47:08.068056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.579 BaseBdev1 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 [ 00:13:23.579 { 00:13:23.579 "name": "BaseBdev1", 00:13:23.579 "aliases": [ 00:13:23.579 "b52856cd-0161-45a8-9612-d47d07a094cf" 00:13:23.579 ], 00:13:23.579 "product_name": "Malloc disk", 00:13:23.579 "block_size": 512, 00:13:23.579 "num_blocks": 65536, 00:13:23.579 "uuid": "b52856cd-0161-45a8-9612-d47d07a094cf", 00:13:23.579 "assigned_rate_limits": { 00:13:23.579 "rw_ios_per_sec": 0, 00:13:23.579 "rw_mbytes_per_sec": 0, 00:13:23.579 "r_mbytes_per_sec": 0, 00:13:23.579 "w_mbytes_per_sec": 0 00:13:23.579 }, 00:13:23.579 "claimed": true, 00:13:23.579 "claim_type": "exclusive_write", 00:13:23.579 "zoned": false, 00:13:23.579 "supported_io_types": { 00:13:23.579 "read": true, 00:13:23.579 "write": true, 00:13:23.579 "unmap": true, 00:13:23.579 "flush": true, 00:13:23.579 "reset": true, 00:13:23.579 "nvme_admin": false, 00:13:23.579 "nvme_io": false, 00:13:23.579 "nvme_io_md": false, 00:13:23.579 "write_zeroes": true, 00:13:23.579 "zcopy": true, 00:13:23.579 "get_zone_info": false, 00:13:23.579 "zone_management": false, 00:13:23.579 "zone_append": false, 00:13:23.579 "compare": false, 00:13:23.579 "compare_and_write": false, 00:13:23.579 "abort": true, 00:13:23.579 "seek_hole": false, 00:13:23.579 "seek_data": false, 00:13:23.579 "copy": true, 00:13:23.579 "nvme_iov_md": false 00:13:23.579 }, 00:13:23.579 "memory_domains": [ 00:13:23.579 { 00:13:23.579 "dma_device_id": "system", 00:13:23.579 "dma_device_type": 1 00:13:23.579 }, 00:13:23.579 { 00:13:23.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.579 "dma_device_type": 2 00:13:23.579 } 00:13:23.579 ], 00:13:23.579 "driver_specific": {} 
00:13:23.579 } 00:13:23.579 ] 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.579 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.579 "name": "Existed_Raid", 00:13:23.579 "uuid": "a565c70c-7ad0-40e5-a109-45c758761ef7", 00:13:23.579 "strip_size_kb": 0, 00:13:23.579 "state": "configuring", 00:13:23.579 "raid_level": "raid1", 00:13:23.579 "superblock": true, 00:13:23.579 "num_base_bdevs": 4, 00:13:23.579 "num_base_bdevs_discovered": 1, 00:13:23.579 "num_base_bdevs_operational": 4, 00:13:23.579 "base_bdevs_list": [ 00:13:23.579 { 00:13:23.579 "name": "BaseBdev1", 00:13:23.579 "uuid": "b52856cd-0161-45a8-9612-d47d07a094cf", 00:13:23.579 "is_configured": true, 00:13:23.579 "data_offset": 2048, 00:13:23.579 "data_size": 63488 00:13:23.579 }, 00:13:23.579 { 00:13:23.579 "name": "BaseBdev2", 00:13:23.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.579 "is_configured": false, 00:13:23.579 "data_offset": 0, 00:13:23.579 "data_size": 0 00:13:23.579 }, 00:13:23.579 { 00:13:23.579 "name": "BaseBdev3", 00:13:23.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.580 "is_configured": false, 00:13:23.580 "data_offset": 0, 00:13:23.580 "data_size": 0 00:13:23.580 }, 00:13:23.580 { 00:13:23.580 "name": "BaseBdev4", 00:13:23.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.580 "is_configured": false, 00:13:23.580 "data_offset": 0, 00:13:23.580 "data_size": 0 00:13:23.580 } 00:13:23.580 ] 00:13:23.580 }' 00:13:23.580 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.580 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.149 [2024-10-11 09:47:08.571277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:24.149 [2024-10-11 09:47:08.571350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.149 [2024-10-11 09:47:08.583287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.149 [2024-10-11 09:47:08.585130] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.149 [2024-10-11 09:47:08.585174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.149 [2024-10-11 09:47:08.585185] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:24.149 [2024-10-11 09:47:08.585198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:24.149 [2024-10-11 09:47:08.585205] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:24.149 [2024-10-11 09:47:08.585212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:24.149 09:47:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.149 "name": 
"Existed_Raid", 00:13:24.149 "uuid": "d19bceaf-d584-4507-a901-edff8cdfcdbf", 00:13:24.149 "strip_size_kb": 0, 00:13:24.149 "state": "configuring", 00:13:24.149 "raid_level": "raid1", 00:13:24.149 "superblock": true, 00:13:24.149 "num_base_bdevs": 4, 00:13:24.149 "num_base_bdevs_discovered": 1, 00:13:24.149 "num_base_bdevs_operational": 4, 00:13:24.149 "base_bdevs_list": [ 00:13:24.149 { 00:13:24.149 "name": "BaseBdev1", 00:13:24.149 "uuid": "b52856cd-0161-45a8-9612-d47d07a094cf", 00:13:24.149 "is_configured": true, 00:13:24.149 "data_offset": 2048, 00:13:24.149 "data_size": 63488 00:13:24.149 }, 00:13:24.149 { 00:13:24.149 "name": "BaseBdev2", 00:13:24.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.149 "is_configured": false, 00:13:24.149 "data_offset": 0, 00:13:24.149 "data_size": 0 00:13:24.149 }, 00:13:24.149 { 00:13:24.149 "name": "BaseBdev3", 00:13:24.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.149 "is_configured": false, 00:13:24.149 "data_offset": 0, 00:13:24.149 "data_size": 0 00:13:24.149 }, 00:13:24.149 { 00:13:24.149 "name": "BaseBdev4", 00:13:24.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.149 "is_configured": false, 00:13:24.149 "data_offset": 0, 00:13:24.149 "data_size": 0 00:13:24.149 } 00:13:24.149 ] 00:13:24.149 }' 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.149 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.716 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.716 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.716 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.716 [2024-10-11 09:47:09.139305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.717 
BaseBdev2 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.717 [ 00:13:24.717 { 00:13:24.717 "name": "BaseBdev2", 00:13:24.717 "aliases": [ 00:13:24.717 "399fc81c-ad34-4da6-9b58-a4cbea648575" 00:13:24.717 ], 00:13:24.717 "product_name": "Malloc disk", 00:13:24.717 "block_size": 512, 00:13:24.717 "num_blocks": 65536, 00:13:24.717 "uuid": "399fc81c-ad34-4da6-9b58-a4cbea648575", 00:13:24.717 "assigned_rate_limits": { 
00:13:24.717 "rw_ios_per_sec": 0, 00:13:24.717 "rw_mbytes_per_sec": 0, 00:13:24.717 "r_mbytes_per_sec": 0, 00:13:24.717 "w_mbytes_per_sec": 0 00:13:24.717 }, 00:13:24.717 "claimed": true, 00:13:24.717 "claim_type": "exclusive_write", 00:13:24.717 "zoned": false, 00:13:24.717 "supported_io_types": { 00:13:24.717 "read": true, 00:13:24.717 "write": true, 00:13:24.717 "unmap": true, 00:13:24.717 "flush": true, 00:13:24.717 "reset": true, 00:13:24.717 "nvme_admin": false, 00:13:24.717 "nvme_io": false, 00:13:24.717 "nvme_io_md": false, 00:13:24.717 "write_zeroes": true, 00:13:24.717 "zcopy": true, 00:13:24.717 "get_zone_info": false, 00:13:24.717 "zone_management": false, 00:13:24.717 "zone_append": false, 00:13:24.717 "compare": false, 00:13:24.717 "compare_and_write": false, 00:13:24.717 "abort": true, 00:13:24.717 "seek_hole": false, 00:13:24.717 "seek_data": false, 00:13:24.717 "copy": true, 00:13:24.717 "nvme_iov_md": false 00:13:24.717 }, 00:13:24.717 "memory_domains": [ 00:13:24.717 { 00:13:24.717 "dma_device_id": "system", 00:13:24.717 "dma_device_type": 1 00:13:24.717 }, 00:13:24.717 { 00:13:24.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.717 "dma_device_type": 2 00:13:24.717 } 00:13:24.717 ], 00:13:24.717 "driver_specific": {} 00:13:24.717 } 00:13:24.717 ] 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.717 "name": "Existed_Raid", 00:13:24.717 "uuid": "d19bceaf-d584-4507-a901-edff8cdfcdbf", 00:13:24.717 "strip_size_kb": 0, 00:13:24.717 "state": "configuring", 00:13:24.717 "raid_level": "raid1", 00:13:24.717 "superblock": true, 00:13:24.717 "num_base_bdevs": 4, 00:13:24.717 "num_base_bdevs_discovered": 2, 00:13:24.717 "num_base_bdevs_operational": 4, 00:13:24.717 
"base_bdevs_list": [ 00:13:24.717 { 00:13:24.717 "name": "BaseBdev1", 00:13:24.717 "uuid": "b52856cd-0161-45a8-9612-d47d07a094cf", 00:13:24.717 "is_configured": true, 00:13:24.717 "data_offset": 2048, 00:13:24.717 "data_size": 63488 00:13:24.717 }, 00:13:24.717 { 00:13:24.717 "name": "BaseBdev2", 00:13:24.717 "uuid": "399fc81c-ad34-4da6-9b58-a4cbea648575", 00:13:24.717 "is_configured": true, 00:13:24.717 "data_offset": 2048, 00:13:24.717 "data_size": 63488 00:13:24.717 }, 00:13:24.717 { 00:13:24.717 "name": "BaseBdev3", 00:13:24.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.717 "is_configured": false, 00:13:24.717 "data_offset": 0, 00:13:24.717 "data_size": 0 00:13:24.717 }, 00:13:24.717 { 00:13:24.717 "name": "BaseBdev4", 00:13:24.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.717 "is_configured": false, 00:13:24.717 "data_offset": 0, 00:13:24.717 "data_size": 0 00:13:24.717 } 00:13:24.717 ] 00:13:24.717 }' 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.717 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.976 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:24.976 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.976 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.236 [2024-10-11 09:47:09.670264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.236 BaseBdev3 00:13:25.236 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.236 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:25.236 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:13:25.236 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:25.236 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.237 [ 00:13:25.237 { 00:13:25.237 "name": "BaseBdev3", 00:13:25.237 "aliases": [ 00:13:25.237 "09f18a71-a65c-4c5d-bcc3-e0c8b5170567" 00:13:25.237 ], 00:13:25.237 "product_name": "Malloc disk", 00:13:25.237 "block_size": 512, 00:13:25.237 "num_blocks": 65536, 00:13:25.237 "uuid": "09f18a71-a65c-4c5d-bcc3-e0c8b5170567", 00:13:25.237 "assigned_rate_limits": { 00:13:25.237 "rw_ios_per_sec": 0, 00:13:25.237 "rw_mbytes_per_sec": 0, 00:13:25.237 "r_mbytes_per_sec": 0, 00:13:25.237 "w_mbytes_per_sec": 0 00:13:25.237 }, 00:13:25.237 "claimed": true, 00:13:25.237 "claim_type": "exclusive_write", 00:13:25.237 "zoned": false, 00:13:25.237 "supported_io_types": { 00:13:25.237 "read": true, 00:13:25.237 
"write": true, 00:13:25.237 "unmap": true, 00:13:25.237 "flush": true, 00:13:25.237 "reset": true, 00:13:25.237 "nvme_admin": false, 00:13:25.237 "nvme_io": false, 00:13:25.237 "nvme_io_md": false, 00:13:25.237 "write_zeroes": true, 00:13:25.237 "zcopy": true, 00:13:25.237 "get_zone_info": false, 00:13:25.237 "zone_management": false, 00:13:25.237 "zone_append": false, 00:13:25.237 "compare": false, 00:13:25.237 "compare_and_write": false, 00:13:25.237 "abort": true, 00:13:25.237 "seek_hole": false, 00:13:25.237 "seek_data": false, 00:13:25.237 "copy": true, 00:13:25.237 "nvme_iov_md": false 00:13:25.237 }, 00:13:25.237 "memory_domains": [ 00:13:25.237 { 00:13:25.237 "dma_device_id": "system", 00:13:25.237 "dma_device_type": 1 00:13:25.237 }, 00:13:25.237 { 00:13:25.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.237 "dma_device_type": 2 00:13:25.237 } 00:13:25.237 ], 00:13:25.237 "driver_specific": {} 00:13:25.237 } 00:13:25.237 ] 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.237 "name": "Existed_Raid", 00:13:25.237 "uuid": "d19bceaf-d584-4507-a901-edff8cdfcdbf", 00:13:25.237 "strip_size_kb": 0, 00:13:25.237 "state": "configuring", 00:13:25.237 "raid_level": "raid1", 00:13:25.237 "superblock": true, 00:13:25.237 "num_base_bdevs": 4, 00:13:25.237 "num_base_bdevs_discovered": 3, 00:13:25.237 "num_base_bdevs_operational": 4, 00:13:25.237 "base_bdevs_list": [ 00:13:25.237 { 00:13:25.237 "name": "BaseBdev1", 00:13:25.237 "uuid": "b52856cd-0161-45a8-9612-d47d07a094cf", 00:13:25.237 "is_configured": true, 00:13:25.237 "data_offset": 2048, 00:13:25.237 "data_size": 63488 00:13:25.237 }, 00:13:25.237 { 00:13:25.237 "name": "BaseBdev2", 00:13:25.237 "uuid": 
"399fc81c-ad34-4da6-9b58-a4cbea648575", 00:13:25.237 "is_configured": true, 00:13:25.237 "data_offset": 2048, 00:13:25.237 "data_size": 63488 00:13:25.237 }, 00:13:25.237 { 00:13:25.237 "name": "BaseBdev3", 00:13:25.237 "uuid": "09f18a71-a65c-4c5d-bcc3-e0c8b5170567", 00:13:25.237 "is_configured": true, 00:13:25.237 "data_offset": 2048, 00:13:25.237 "data_size": 63488 00:13:25.237 }, 00:13:25.237 { 00:13:25.237 "name": "BaseBdev4", 00:13:25.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.237 "is_configured": false, 00:13:25.237 "data_offset": 0, 00:13:25.237 "data_size": 0 00:13:25.237 } 00:13:25.237 ] 00:13:25.237 }' 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.237 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.805 [2024-10-11 09:47:10.228835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.805 [2024-10-11 09:47:10.229127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:25.805 [2024-10-11 09:47:10.229145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.805 [2024-10-11 09:47:10.229434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:25.805 [2024-10-11 09:47:10.229607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:25.805 [2024-10-11 09:47:10.229622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:13:25.805 [2024-10-11 09:47:10.229788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.805 BaseBdev4 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:25.805 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.806 [ 00:13:25.806 { 00:13:25.806 "name": "BaseBdev4", 00:13:25.806 "aliases": [ 00:13:25.806 "c5a51461-078a-4f00-9be3-a0fa7719a859" 00:13:25.806 ], 00:13:25.806 "product_name": "Malloc disk", 00:13:25.806 "block_size": 512, 00:13:25.806 
"num_blocks": 65536, 00:13:25.806 "uuid": "c5a51461-078a-4f00-9be3-a0fa7719a859", 00:13:25.806 "assigned_rate_limits": { 00:13:25.806 "rw_ios_per_sec": 0, 00:13:25.806 "rw_mbytes_per_sec": 0, 00:13:25.806 "r_mbytes_per_sec": 0, 00:13:25.806 "w_mbytes_per_sec": 0 00:13:25.806 }, 00:13:25.806 "claimed": true, 00:13:25.806 "claim_type": "exclusive_write", 00:13:25.806 "zoned": false, 00:13:25.806 "supported_io_types": { 00:13:25.806 "read": true, 00:13:25.806 "write": true, 00:13:25.806 "unmap": true, 00:13:25.806 "flush": true, 00:13:25.806 "reset": true, 00:13:25.806 "nvme_admin": false, 00:13:25.806 "nvme_io": false, 00:13:25.806 "nvme_io_md": false, 00:13:25.806 "write_zeroes": true, 00:13:25.806 "zcopy": true, 00:13:25.806 "get_zone_info": false, 00:13:25.806 "zone_management": false, 00:13:25.806 "zone_append": false, 00:13:25.806 "compare": false, 00:13:25.806 "compare_and_write": false, 00:13:25.806 "abort": true, 00:13:25.806 "seek_hole": false, 00:13:25.806 "seek_data": false, 00:13:25.806 "copy": true, 00:13:25.806 "nvme_iov_md": false 00:13:25.806 }, 00:13:25.806 "memory_domains": [ 00:13:25.806 { 00:13:25.806 "dma_device_id": "system", 00:13:25.806 "dma_device_type": 1 00:13:25.806 }, 00:13:25.806 { 00:13:25.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.806 "dma_device_type": 2 00:13:25.806 } 00:13:25.806 ], 00:13:25.806 "driver_specific": {} 00:13:25.806 } 00:13:25.806 ] 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.806 "name": "Existed_Raid", 00:13:25.806 "uuid": "d19bceaf-d584-4507-a901-edff8cdfcdbf", 00:13:25.806 "strip_size_kb": 0, 00:13:25.806 "state": "online", 00:13:25.806 "raid_level": "raid1", 00:13:25.806 "superblock": true, 00:13:25.806 "num_base_bdevs": 4, 
00:13:25.806 "num_base_bdevs_discovered": 4, 00:13:25.806 "num_base_bdevs_operational": 4, 00:13:25.806 "base_bdevs_list": [ 00:13:25.806 { 00:13:25.806 "name": "BaseBdev1", 00:13:25.806 "uuid": "b52856cd-0161-45a8-9612-d47d07a094cf", 00:13:25.806 "is_configured": true, 00:13:25.806 "data_offset": 2048, 00:13:25.806 "data_size": 63488 00:13:25.806 }, 00:13:25.806 { 00:13:25.806 "name": "BaseBdev2", 00:13:25.806 "uuid": "399fc81c-ad34-4da6-9b58-a4cbea648575", 00:13:25.806 "is_configured": true, 00:13:25.806 "data_offset": 2048, 00:13:25.806 "data_size": 63488 00:13:25.806 }, 00:13:25.806 { 00:13:25.806 "name": "BaseBdev3", 00:13:25.806 "uuid": "09f18a71-a65c-4c5d-bcc3-e0c8b5170567", 00:13:25.806 "is_configured": true, 00:13:25.806 "data_offset": 2048, 00:13:25.806 "data_size": 63488 00:13:25.806 }, 00:13:25.806 { 00:13:25.806 "name": "BaseBdev4", 00:13:25.806 "uuid": "c5a51461-078a-4f00-9be3-a0fa7719a859", 00:13:25.806 "is_configured": true, 00:13:25.806 "data_offset": 2048, 00:13:25.806 "data_size": 63488 00:13:25.806 } 00:13:25.806 ] 00:13:25.806 }' 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.806 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:26.374 
09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:26.374 [2024-10-11 09:47:10.720516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.374 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:26.374 "name": "Existed_Raid", 00:13:26.374 "aliases": [ 00:13:26.374 "d19bceaf-d584-4507-a901-edff8cdfcdbf" 00:13:26.374 ], 00:13:26.374 "product_name": "Raid Volume", 00:13:26.374 "block_size": 512, 00:13:26.374 "num_blocks": 63488, 00:13:26.374 "uuid": "d19bceaf-d584-4507-a901-edff8cdfcdbf", 00:13:26.374 "assigned_rate_limits": { 00:13:26.374 "rw_ios_per_sec": 0, 00:13:26.374 "rw_mbytes_per_sec": 0, 00:13:26.374 "r_mbytes_per_sec": 0, 00:13:26.374 "w_mbytes_per_sec": 0 00:13:26.374 }, 00:13:26.374 "claimed": false, 00:13:26.374 "zoned": false, 00:13:26.374 "supported_io_types": { 00:13:26.374 "read": true, 00:13:26.374 "write": true, 00:13:26.374 "unmap": false, 00:13:26.374 "flush": false, 00:13:26.374 "reset": true, 00:13:26.374 "nvme_admin": false, 00:13:26.374 "nvme_io": false, 00:13:26.374 "nvme_io_md": false, 00:13:26.374 "write_zeroes": true, 00:13:26.374 "zcopy": false, 00:13:26.374 "get_zone_info": false, 00:13:26.374 "zone_management": false, 00:13:26.374 "zone_append": false, 00:13:26.374 "compare": false, 00:13:26.374 "compare_and_write": false, 00:13:26.374 "abort": false, 00:13:26.374 "seek_hole": false, 00:13:26.374 "seek_data": false, 00:13:26.374 "copy": false, 00:13:26.374 
"nvme_iov_md": false 00:13:26.374 }, 00:13:26.374 "memory_domains": [ 00:13:26.374 { 00:13:26.374 "dma_device_id": "system", 00:13:26.374 "dma_device_type": 1 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.374 "dma_device_type": 2 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "dma_device_id": "system", 00:13:26.374 "dma_device_type": 1 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.374 "dma_device_type": 2 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "dma_device_id": "system", 00:13:26.374 "dma_device_type": 1 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.374 "dma_device_type": 2 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "dma_device_id": "system", 00:13:26.374 "dma_device_type": 1 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.374 "dma_device_type": 2 00:13:26.374 } 00:13:26.374 ], 00:13:26.374 "driver_specific": { 00:13:26.374 "raid": { 00:13:26.374 "uuid": "d19bceaf-d584-4507-a901-edff8cdfcdbf", 00:13:26.374 "strip_size_kb": 0, 00:13:26.374 "state": "online", 00:13:26.374 "raid_level": "raid1", 00:13:26.374 "superblock": true, 00:13:26.374 "num_base_bdevs": 4, 00:13:26.374 "num_base_bdevs_discovered": 4, 00:13:26.374 "num_base_bdevs_operational": 4, 00:13:26.374 "base_bdevs_list": [ 00:13:26.374 { 00:13:26.374 "name": "BaseBdev1", 00:13:26.374 "uuid": "b52856cd-0161-45a8-9612-d47d07a094cf", 00:13:26.374 "is_configured": true, 00:13:26.374 "data_offset": 2048, 00:13:26.374 "data_size": 63488 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "name": "BaseBdev2", 00:13:26.374 "uuid": "399fc81c-ad34-4da6-9b58-a4cbea648575", 00:13:26.374 "is_configured": true, 00:13:26.374 "data_offset": 2048, 00:13:26.374 "data_size": 63488 00:13:26.374 }, 00:13:26.374 { 00:13:26.374 "name": "BaseBdev3", 00:13:26.374 "uuid": "09f18a71-a65c-4c5d-bcc3-e0c8b5170567", 00:13:26.375 "is_configured": true, 
00:13:26.375 "data_offset": 2048, 00:13:26.375 "data_size": 63488 00:13:26.375 }, 00:13:26.375 { 00:13:26.375 "name": "BaseBdev4", 00:13:26.375 "uuid": "c5a51461-078a-4f00-9be3-a0fa7719a859", 00:13:26.375 "is_configured": true, 00:13:26.375 "data_offset": 2048, 00:13:26.375 "data_size": 63488 00:13:26.375 } 00:13:26.375 ] 00:13:26.375 } 00:13:26.375 } 00:13:26.375 }' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:26.375 BaseBdev2 00:13:26.375 BaseBdev3 00:13:26.375 BaseBdev4' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.375 09:47:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.375 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.634 [2024-10-11 09:47:11.071674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.634 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:26.635 09:47:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.635 "name": "Existed_Raid", 00:13:26.635 "uuid": "d19bceaf-d584-4507-a901-edff8cdfcdbf", 00:13:26.635 "strip_size_kb": 0, 00:13:26.635 
"state": "online", 00:13:26.635 "raid_level": "raid1", 00:13:26.635 "superblock": true, 00:13:26.635 "num_base_bdevs": 4, 00:13:26.635 "num_base_bdevs_discovered": 3, 00:13:26.635 "num_base_bdevs_operational": 3, 00:13:26.635 "base_bdevs_list": [ 00:13:26.635 { 00:13:26.635 "name": null, 00:13:26.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.635 "is_configured": false, 00:13:26.635 "data_offset": 0, 00:13:26.635 "data_size": 63488 00:13:26.635 }, 00:13:26.635 { 00:13:26.635 "name": "BaseBdev2", 00:13:26.635 "uuid": "399fc81c-ad34-4da6-9b58-a4cbea648575", 00:13:26.635 "is_configured": true, 00:13:26.635 "data_offset": 2048, 00:13:26.635 "data_size": 63488 00:13:26.635 }, 00:13:26.635 { 00:13:26.635 "name": "BaseBdev3", 00:13:26.635 "uuid": "09f18a71-a65c-4c5d-bcc3-e0c8b5170567", 00:13:26.635 "is_configured": true, 00:13:26.635 "data_offset": 2048, 00:13:26.635 "data_size": 63488 00:13:26.635 }, 00:13:26.635 { 00:13:26.635 "name": "BaseBdev4", 00:13:26.635 "uuid": "c5a51461-078a-4f00-9be3-a0fa7719a859", 00:13:26.635 "is_configured": true, 00:13:26.635 "data_offset": 2048, 00:13:26.635 "data_size": 63488 00:13:26.635 } 00:13:26.635 ] 00:13:26.635 }' 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.635 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:27.203 09:47:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.203 [2024-10-11 09:47:11.715969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.203 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.462 [2024-10-11 09:47:11.874687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.462 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.462 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.462 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:27.462 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:27.462 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:27.462 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.462 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.462 [2024-10-11 09:47:12.040858] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:27.462 [2024-10-11 09:47:12.040994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.722 [2024-10-11 09:47:12.143081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.722 [2024-10-11 09:47:12.143159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.722 [2024-10-11 09:47:12.143187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.722 BaseBdev2 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:27.722 [ 00:13:27.722 { 00:13:27.722 "name": "BaseBdev2", 00:13:27.722 "aliases": [ 00:13:27.722 "fed9332b-653e-44da-99c8-363d4165b16c" 00:13:27.722 ], 00:13:27.722 "product_name": "Malloc disk", 00:13:27.722 "block_size": 512, 00:13:27.722 "num_blocks": 65536, 00:13:27.722 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:27.722 "assigned_rate_limits": { 00:13:27.722 "rw_ios_per_sec": 0, 00:13:27.722 "rw_mbytes_per_sec": 0, 00:13:27.722 "r_mbytes_per_sec": 0, 00:13:27.722 "w_mbytes_per_sec": 0 00:13:27.722 }, 00:13:27.722 "claimed": false, 00:13:27.722 "zoned": false, 00:13:27.722 "supported_io_types": { 00:13:27.722 "read": true, 00:13:27.722 "write": true, 00:13:27.722 "unmap": true, 00:13:27.722 "flush": true, 00:13:27.722 "reset": true, 00:13:27.722 "nvme_admin": false, 00:13:27.722 "nvme_io": false, 00:13:27.722 "nvme_io_md": false, 00:13:27.722 "write_zeroes": true, 00:13:27.722 "zcopy": true, 00:13:27.722 "get_zone_info": false, 00:13:27.722 "zone_management": false, 00:13:27.722 "zone_append": false, 00:13:27.722 "compare": false, 00:13:27.722 "compare_and_write": false, 00:13:27.722 "abort": true, 00:13:27.722 "seek_hole": false, 00:13:27.722 "seek_data": false, 00:13:27.722 "copy": true, 00:13:27.722 "nvme_iov_md": false 00:13:27.722 }, 00:13:27.722 "memory_domains": [ 00:13:27.722 { 00:13:27.722 "dma_device_id": "system", 00:13:27.722 "dma_device_type": 1 00:13:27.722 }, 00:13:27.722 { 00:13:27.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.722 "dma_device_type": 2 00:13:27.722 } 00:13:27.722 ], 00:13:27.722 "driver_specific": {} 00:13:27.722 } 00:13:27.722 ] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:27.722 09:47:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.722 BaseBdev3 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:27.722 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.723 09:47:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.982 [ 00:13:27.982 { 00:13:27.982 "name": "BaseBdev3", 00:13:27.982 "aliases": [ 00:13:27.982 "296aba22-da76-4929-b720-529c782fb023" 00:13:27.982 ], 00:13:27.982 "product_name": "Malloc disk", 00:13:27.982 "block_size": 512, 00:13:27.982 "num_blocks": 65536, 00:13:27.982 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:27.982 "assigned_rate_limits": { 00:13:27.982 "rw_ios_per_sec": 0, 00:13:27.982 "rw_mbytes_per_sec": 0, 00:13:27.982 "r_mbytes_per_sec": 0, 00:13:27.982 "w_mbytes_per_sec": 0 00:13:27.982 }, 00:13:27.982 "claimed": false, 00:13:27.982 "zoned": false, 00:13:27.982 "supported_io_types": { 00:13:27.982 "read": true, 00:13:27.982 "write": true, 00:13:27.982 "unmap": true, 00:13:27.983 "flush": true, 00:13:27.983 "reset": true, 00:13:27.983 "nvme_admin": false, 00:13:27.983 "nvme_io": false, 00:13:27.983 "nvme_io_md": false, 00:13:27.983 "write_zeroes": true, 00:13:27.983 "zcopy": true, 00:13:27.983 "get_zone_info": false, 00:13:27.983 "zone_management": false, 00:13:27.983 "zone_append": false, 00:13:27.983 "compare": false, 00:13:27.983 "compare_and_write": false, 00:13:27.983 "abort": true, 00:13:27.983 "seek_hole": false, 00:13:27.983 "seek_data": false, 00:13:27.983 "copy": true, 00:13:27.983 "nvme_iov_md": false 00:13:27.983 }, 00:13:27.983 "memory_domains": [ 00:13:27.983 { 00:13:27.983 "dma_device_id": "system", 00:13:27.983 "dma_device_type": 1 00:13:27.983 }, 00:13:27.983 { 00:13:27.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.983 "dma_device_type": 2 00:13:27.983 } 00:13:27.983 ], 00:13:27.983 "driver_specific": {} 00:13:27.983 } 00:13:27.983 ] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 BaseBdev4 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 [ 00:13:27.983 { 00:13:27.983 "name": "BaseBdev4", 00:13:27.983 "aliases": [ 00:13:27.983 "f4d88c9d-36a4-45e7-be02-e59efb46d70b" 00:13:27.983 ], 00:13:27.983 "product_name": "Malloc disk", 00:13:27.983 "block_size": 512, 00:13:27.983 "num_blocks": 65536, 00:13:27.983 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:27.983 "assigned_rate_limits": { 00:13:27.983 "rw_ios_per_sec": 0, 00:13:27.983 "rw_mbytes_per_sec": 0, 00:13:27.983 "r_mbytes_per_sec": 0, 00:13:27.983 "w_mbytes_per_sec": 0 00:13:27.983 }, 00:13:27.983 "claimed": false, 00:13:27.983 "zoned": false, 00:13:27.983 "supported_io_types": { 00:13:27.983 "read": true, 00:13:27.983 "write": true, 00:13:27.983 "unmap": true, 00:13:27.983 "flush": true, 00:13:27.983 "reset": true, 00:13:27.983 "nvme_admin": false, 00:13:27.983 "nvme_io": false, 00:13:27.983 "nvme_io_md": false, 00:13:27.983 "write_zeroes": true, 00:13:27.983 "zcopy": true, 00:13:27.983 "get_zone_info": false, 00:13:27.983 "zone_management": false, 00:13:27.983 "zone_append": false, 00:13:27.983 "compare": false, 00:13:27.983 "compare_and_write": false, 00:13:27.983 "abort": true, 00:13:27.983 "seek_hole": false, 00:13:27.983 "seek_data": false, 00:13:27.983 "copy": true, 00:13:27.983 "nvme_iov_md": false 00:13:27.983 }, 00:13:27.983 "memory_domains": [ 00:13:27.983 { 00:13:27.983 "dma_device_id": "system", 00:13:27.983 "dma_device_type": 1 00:13:27.983 }, 00:13:27.983 { 00:13:27.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.983 "dma_device_type": 2 00:13:27.983 } 00:13:27.983 ], 00:13:27.983 "driver_specific": {} 00:13:27.983 } 00:13:27.983 ] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 [2024-10-11 09:47:12.462526] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:27.983 [2024-10-11 09:47:12.462587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:27.983 [2024-10-11 09:47:12.462611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.983 [2024-10-11 09:47:12.464728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.983 [2024-10-11 09:47:12.464799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.983 "name": "Existed_Raid", 00:13:27.983 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:27.983 "strip_size_kb": 0, 00:13:27.983 "state": "configuring", 00:13:27.983 "raid_level": "raid1", 00:13:27.983 "superblock": true, 00:13:27.983 "num_base_bdevs": 4, 00:13:27.983 "num_base_bdevs_discovered": 3, 00:13:27.983 "num_base_bdevs_operational": 4, 00:13:27.983 "base_bdevs_list": [ 00:13:27.983 { 00:13:27.983 "name": "BaseBdev1", 00:13:27.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.983 "is_configured": false, 00:13:27.983 "data_offset": 0, 00:13:27.983 "data_size": 0 00:13:27.983 }, 00:13:27.983 { 00:13:27.983 "name": "BaseBdev2", 00:13:27.983 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 
00:13:27.983 "is_configured": true, 00:13:27.983 "data_offset": 2048, 00:13:27.983 "data_size": 63488 00:13:27.983 }, 00:13:27.983 { 00:13:27.983 "name": "BaseBdev3", 00:13:27.983 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:27.983 "is_configured": true, 00:13:27.983 "data_offset": 2048, 00:13:27.983 "data_size": 63488 00:13:27.983 }, 00:13:27.983 { 00:13:27.983 "name": "BaseBdev4", 00:13:27.983 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:27.983 "is_configured": true, 00:13:27.983 "data_offset": 2048, 00:13:27.983 "data_size": 63488 00:13:27.983 } 00:13:27.983 ] 00:13:27.983 }' 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.983 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.552 [2024-10-11 09:47:12.901834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.552 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.552 "name": "Existed_Raid", 00:13:28.552 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:28.552 "strip_size_kb": 0, 00:13:28.552 "state": "configuring", 00:13:28.552 "raid_level": "raid1", 00:13:28.552 "superblock": true, 00:13:28.552 "num_base_bdevs": 4, 00:13:28.552 "num_base_bdevs_discovered": 2, 00:13:28.552 "num_base_bdevs_operational": 4, 00:13:28.552 "base_bdevs_list": [ 00:13:28.552 { 00:13:28.552 "name": "BaseBdev1", 00:13:28.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.553 "is_configured": false, 00:13:28.553 "data_offset": 0, 00:13:28.553 "data_size": 0 00:13:28.553 }, 00:13:28.553 { 00:13:28.553 "name": null, 00:13:28.553 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:28.553 
"is_configured": false, 00:13:28.553 "data_offset": 0, 00:13:28.553 "data_size": 63488 00:13:28.553 }, 00:13:28.553 { 00:13:28.553 "name": "BaseBdev3", 00:13:28.553 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:28.553 "is_configured": true, 00:13:28.553 "data_offset": 2048, 00:13:28.553 "data_size": 63488 00:13:28.553 }, 00:13:28.553 { 00:13:28.553 "name": "BaseBdev4", 00:13:28.553 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:28.553 "is_configured": true, 00:13:28.553 "data_offset": 2048, 00:13:28.553 "data_size": 63488 00:13:28.553 } 00:13:28.553 ] 00:13:28.553 }' 00:13:28.553 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.553 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.814 [2024-10-11 09:47:13.419340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.814 BaseBdev1 
00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.814 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.815 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.815 [ 00:13:28.815 { 00:13:28.815 "name": "BaseBdev1", 00:13:28.815 "aliases": [ 00:13:28.815 "b974f411-5a25-4308-8acd-d8268620d25d" 00:13:28.815 ], 00:13:28.815 "product_name": "Malloc disk", 00:13:29.074 "block_size": 512, 00:13:29.074 "num_blocks": 65536, 00:13:29.074 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:29.074 "assigned_rate_limits": { 00:13:29.074 
"rw_ios_per_sec": 0, 00:13:29.074 "rw_mbytes_per_sec": 0, 00:13:29.074 "r_mbytes_per_sec": 0, 00:13:29.074 "w_mbytes_per_sec": 0 00:13:29.074 }, 00:13:29.074 "claimed": true, 00:13:29.074 "claim_type": "exclusive_write", 00:13:29.074 "zoned": false, 00:13:29.074 "supported_io_types": { 00:13:29.074 "read": true, 00:13:29.074 "write": true, 00:13:29.074 "unmap": true, 00:13:29.074 "flush": true, 00:13:29.074 "reset": true, 00:13:29.074 "nvme_admin": false, 00:13:29.074 "nvme_io": false, 00:13:29.074 "nvme_io_md": false, 00:13:29.074 "write_zeroes": true, 00:13:29.074 "zcopy": true, 00:13:29.074 "get_zone_info": false, 00:13:29.074 "zone_management": false, 00:13:29.074 "zone_append": false, 00:13:29.074 "compare": false, 00:13:29.074 "compare_and_write": false, 00:13:29.074 "abort": true, 00:13:29.074 "seek_hole": false, 00:13:29.074 "seek_data": false, 00:13:29.074 "copy": true, 00:13:29.074 "nvme_iov_md": false 00:13:29.074 }, 00:13:29.074 "memory_domains": [ 00:13:29.074 { 00:13:29.074 "dma_device_id": "system", 00:13:29.074 "dma_device_type": 1 00:13:29.074 }, 00:13:29.074 { 00:13:29.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.074 "dma_device_type": 2 00:13:29.074 } 00:13:29.074 ], 00:13:29.074 "driver_specific": {} 00:13:29.074 } 00:13:29.074 ] 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.074 "name": "Existed_Raid", 00:13:29.074 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:29.074 "strip_size_kb": 0, 00:13:29.074 "state": "configuring", 00:13:29.074 "raid_level": "raid1", 00:13:29.074 "superblock": true, 00:13:29.074 "num_base_bdevs": 4, 00:13:29.074 "num_base_bdevs_discovered": 3, 00:13:29.074 "num_base_bdevs_operational": 4, 00:13:29.074 "base_bdevs_list": [ 00:13:29.074 { 00:13:29.074 "name": "BaseBdev1", 00:13:29.074 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:29.074 "is_configured": true, 00:13:29.074 "data_offset": 2048, 00:13:29.074 "data_size": 63488 
00:13:29.074 }, 00:13:29.074 { 00:13:29.074 "name": null, 00:13:29.074 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:29.074 "is_configured": false, 00:13:29.074 "data_offset": 0, 00:13:29.074 "data_size": 63488 00:13:29.074 }, 00:13:29.074 { 00:13:29.074 "name": "BaseBdev3", 00:13:29.074 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:29.074 "is_configured": true, 00:13:29.074 "data_offset": 2048, 00:13:29.074 "data_size": 63488 00:13:29.074 }, 00:13:29.074 { 00:13:29.074 "name": "BaseBdev4", 00:13:29.074 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:29.074 "is_configured": true, 00:13:29.074 "data_offset": 2048, 00:13:29.074 "data_size": 63488 00:13:29.074 } 00:13:29.074 ] 00:13:29.074 }' 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.074 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.332 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.332 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.332 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.332 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.332 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.590 
[2024-10-11 09:47:13.970580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.590 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.590 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.590 09:47:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.590 "name": "Existed_Raid", 00:13:29.590 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:29.590 "strip_size_kb": 0, 00:13:29.590 "state": "configuring", 00:13:29.590 "raid_level": "raid1", 00:13:29.590 "superblock": true, 00:13:29.590 "num_base_bdevs": 4, 00:13:29.590 "num_base_bdevs_discovered": 2, 00:13:29.590 "num_base_bdevs_operational": 4, 00:13:29.591 "base_bdevs_list": [ 00:13:29.591 { 00:13:29.591 "name": "BaseBdev1", 00:13:29.591 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:29.591 "is_configured": true, 00:13:29.591 "data_offset": 2048, 00:13:29.591 "data_size": 63488 00:13:29.591 }, 00:13:29.591 { 00:13:29.591 "name": null, 00:13:29.591 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:29.591 "is_configured": false, 00:13:29.591 "data_offset": 0, 00:13:29.591 "data_size": 63488 00:13:29.591 }, 00:13:29.591 { 00:13:29.591 "name": null, 00:13:29.591 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:29.591 "is_configured": false, 00:13:29.591 "data_offset": 0, 00:13:29.591 "data_size": 63488 00:13:29.591 }, 00:13:29.591 { 00:13:29.591 "name": "BaseBdev4", 00:13:29.591 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:29.591 "is_configured": true, 00:13:29.591 "data_offset": 2048, 00:13:29.591 "data_size": 63488 00:13:29.591 } 00:13:29.591 ] 00:13:29.591 }' 00:13:29.591 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.591 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.849 
09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 [2024-10-11 09:47:14.429806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.849 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.109 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.109 "name": "Existed_Raid", 00:13:30.109 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:30.109 "strip_size_kb": 0, 00:13:30.109 "state": "configuring", 00:13:30.109 "raid_level": "raid1", 00:13:30.109 "superblock": true, 00:13:30.109 "num_base_bdevs": 4, 00:13:30.109 "num_base_bdevs_discovered": 3, 00:13:30.109 "num_base_bdevs_operational": 4, 00:13:30.109 "base_bdevs_list": [ 00:13:30.109 { 00:13:30.109 "name": "BaseBdev1", 00:13:30.109 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:30.109 "is_configured": true, 00:13:30.109 "data_offset": 2048, 00:13:30.109 "data_size": 63488 00:13:30.109 }, 00:13:30.109 { 00:13:30.109 "name": null, 00:13:30.109 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:30.109 "is_configured": false, 00:13:30.109 "data_offset": 0, 00:13:30.109 "data_size": 63488 00:13:30.109 }, 00:13:30.109 { 00:13:30.109 "name": "BaseBdev3", 00:13:30.109 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:30.109 "is_configured": true, 00:13:30.109 "data_offset": 2048, 00:13:30.109 "data_size": 63488 00:13:30.109 }, 00:13:30.109 { 00:13:30.109 "name": "BaseBdev4", 00:13:30.109 "uuid": 
"f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:30.109 "is_configured": true, 00:13:30.109 "data_offset": 2048, 00:13:30.109 "data_size": 63488 00:13:30.109 } 00:13:30.109 ] 00:13:30.109 }' 00:13:30.109 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.109 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.367 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.367 [2024-10-11 09:47:14.925001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.625 "name": "Existed_Raid", 00:13:30.625 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:30.625 "strip_size_kb": 0, 00:13:30.625 "state": "configuring", 00:13:30.625 "raid_level": "raid1", 00:13:30.625 "superblock": true, 00:13:30.625 "num_base_bdevs": 4, 00:13:30.625 "num_base_bdevs_discovered": 2, 00:13:30.625 "num_base_bdevs_operational": 4, 00:13:30.625 "base_bdevs_list": [ 00:13:30.625 { 00:13:30.625 "name": null, 00:13:30.625 
"uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:30.625 "is_configured": false, 00:13:30.625 "data_offset": 0, 00:13:30.625 "data_size": 63488 00:13:30.625 }, 00:13:30.625 { 00:13:30.625 "name": null, 00:13:30.625 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:30.625 "is_configured": false, 00:13:30.625 "data_offset": 0, 00:13:30.625 "data_size": 63488 00:13:30.625 }, 00:13:30.625 { 00:13:30.625 "name": "BaseBdev3", 00:13:30.625 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:30.625 "is_configured": true, 00:13:30.625 "data_offset": 2048, 00:13:30.625 "data_size": 63488 00:13:30.625 }, 00:13:30.625 { 00:13:30.625 "name": "BaseBdev4", 00:13:30.625 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:30.625 "is_configured": true, 00:13:30.625 "data_offset": 2048, 00:13:30.625 "data_size": 63488 00:13:30.625 } 00:13:30.625 ] 00:13:30.625 }' 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.625 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.883 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:30.883 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.883 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.883 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.883 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.141 [2024-10-11 09:47:15.536278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.141 09:47:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.141 "name": "Existed_Raid", 00:13:31.141 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:31.141 "strip_size_kb": 0, 00:13:31.141 "state": "configuring", 00:13:31.141 "raid_level": "raid1", 00:13:31.141 "superblock": true, 00:13:31.141 "num_base_bdevs": 4, 00:13:31.141 "num_base_bdevs_discovered": 3, 00:13:31.141 "num_base_bdevs_operational": 4, 00:13:31.141 "base_bdevs_list": [ 00:13:31.141 { 00:13:31.141 "name": null, 00:13:31.141 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:31.141 "is_configured": false, 00:13:31.141 "data_offset": 0, 00:13:31.141 "data_size": 63488 00:13:31.141 }, 00:13:31.141 { 00:13:31.141 "name": "BaseBdev2", 00:13:31.141 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:31.141 "is_configured": true, 00:13:31.141 "data_offset": 2048, 00:13:31.141 "data_size": 63488 00:13:31.141 }, 00:13:31.141 { 00:13:31.141 "name": "BaseBdev3", 00:13:31.141 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:31.141 "is_configured": true, 00:13:31.141 "data_offset": 2048, 00:13:31.141 "data_size": 63488 00:13:31.141 }, 00:13:31.141 { 00:13:31.141 "name": "BaseBdev4", 00:13:31.141 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:31.141 "is_configured": true, 00:13:31.141 "data_offset": 2048, 00:13:31.141 "data_size": 63488 00:13:31.141 } 00:13:31.141 ] 00:13:31.141 }' 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.141 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.399 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:31.657 09:47:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b974f411-5a25-4308-8acd-d8268620d25d 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.657 [2024-10-11 09:47:16.149440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:31.657 [2024-10-11 09:47:16.149725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:31.657 [2024-10-11 09:47:16.149761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:31.657 [2024-10-11 09:47:16.150079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:31.657 [2024-10-11 09:47:16.150304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:31.657 [2024-10-11 09:47:16.150316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:31.657 [2024-10-11 09:47:16.150475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.657 NewBaseBdev 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.657 09:47:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.657 [ 00:13:31.657 { 00:13:31.657 "name": "NewBaseBdev", 00:13:31.657 "aliases": [ 00:13:31.657 "b974f411-5a25-4308-8acd-d8268620d25d" 00:13:31.657 ], 00:13:31.657 "product_name": "Malloc disk", 00:13:31.657 "block_size": 512, 00:13:31.657 "num_blocks": 65536, 00:13:31.657 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:31.657 "assigned_rate_limits": { 00:13:31.657 "rw_ios_per_sec": 0, 00:13:31.657 "rw_mbytes_per_sec": 0, 00:13:31.657 "r_mbytes_per_sec": 0, 00:13:31.657 "w_mbytes_per_sec": 0 00:13:31.657 }, 00:13:31.657 "claimed": true, 00:13:31.657 "claim_type": "exclusive_write", 00:13:31.657 "zoned": false, 00:13:31.657 "supported_io_types": { 00:13:31.657 "read": true, 00:13:31.657 "write": true, 00:13:31.657 "unmap": true, 00:13:31.657 "flush": true, 00:13:31.657 "reset": true, 00:13:31.657 "nvme_admin": false, 00:13:31.657 "nvme_io": false, 00:13:31.657 "nvme_io_md": false, 00:13:31.657 "write_zeroes": true, 00:13:31.657 "zcopy": true, 00:13:31.657 "get_zone_info": false, 00:13:31.657 "zone_management": false, 00:13:31.657 "zone_append": false, 00:13:31.657 "compare": false, 00:13:31.657 "compare_and_write": false, 00:13:31.657 "abort": true, 00:13:31.657 "seek_hole": false, 00:13:31.657 "seek_data": false, 00:13:31.657 "copy": true, 00:13:31.657 "nvme_iov_md": false 00:13:31.657 }, 00:13:31.657 "memory_domains": [ 00:13:31.657 { 00:13:31.657 "dma_device_id": "system", 00:13:31.657 "dma_device_type": 1 00:13:31.657 }, 00:13:31.657 { 00:13:31.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.657 "dma_device_type": 2 00:13:31.657 } 00:13:31.657 ], 00:13:31.657 "driver_specific": {} 00:13:31.657 } 00:13:31.657 ] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:31.657 09:47:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.657 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.658 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.658 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.658 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.658 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.658 "name": "Existed_Raid", 00:13:31.658 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:31.658 "strip_size_kb": 0, 00:13:31.658 
"state": "online", 00:13:31.658 "raid_level": "raid1", 00:13:31.658 "superblock": true, 00:13:31.658 "num_base_bdevs": 4, 00:13:31.658 "num_base_bdevs_discovered": 4, 00:13:31.658 "num_base_bdevs_operational": 4, 00:13:31.658 "base_bdevs_list": [ 00:13:31.658 { 00:13:31.658 "name": "NewBaseBdev", 00:13:31.658 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:31.658 "is_configured": true, 00:13:31.658 "data_offset": 2048, 00:13:31.658 "data_size": 63488 00:13:31.658 }, 00:13:31.658 { 00:13:31.658 "name": "BaseBdev2", 00:13:31.658 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:31.658 "is_configured": true, 00:13:31.658 "data_offset": 2048, 00:13:31.658 "data_size": 63488 00:13:31.658 }, 00:13:31.658 { 00:13:31.658 "name": "BaseBdev3", 00:13:31.658 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:31.658 "is_configured": true, 00:13:31.658 "data_offset": 2048, 00:13:31.658 "data_size": 63488 00:13:31.658 }, 00:13:31.658 { 00:13:31.658 "name": "BaseBdev4", 00:13:31.658 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:31.658 "is_configured": true, 00:13:31.658 "data_offset": 2048, 00:13:31.658 "data_size": 63488 00:13:31.658 } 00:13:31.658 ] 00:13:31.658 }' 00:13:31.658 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.658 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.225 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:32.225 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:32.226 
09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.226 [2024-10-11 09:47:16.645089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:32.226 "name": "Existed_Raid", 00:13:32.226 "aliases": [ 00:13:32.226 "680d98aa-c27b-4df3-9b5c-ec39fc25cfee" 00:13:32.226 ], 00:13:32.226 "product_name": "Raid Volume", 00:13:32.226 "block_size": 512, 00:13:32.226 "num_blocks": 63488, 00:13:32.226 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:32.226 "assigned_rate_limits": { 00:13:32.226 "rw_ios_per_sec": 0, 00:13:32.226 "rw_mbytes_per_sec": 0, 00:13:32.226 "r_mbytes_per_sec": 0, 00:13:32.226 "w_mbytes_per_sec": 0 00:13:32.226 }, 00:13:32.226 "claimed": false, 00:13:32.226 "zoned": false, 00:13:32.226 "supported_io_types": { 00:13:32.226 "read": true, 00:13:32.226 "write": true, 00:13:32.226 "unmap": false, 00:13:32.226 "flush": false, 00:13:32.226 "reset": true, 00:13:32.226 "nvme_admin": false, 00:13:32.226 "nvme_io": false, 00:13:32.226 "nvme_io_md": false, 00:13:32.226 "write_zeroes": true, 00:13:32.226 "zcopy": false, 00:13:32.226 "get_zone_info": false, 00:13:32.226 "zone_management": false, 00:13:32.226 "zone_append": false, 00:13:32.226 "compare": false, 00:13:32.226 "compare_and_write": false, 00:13:32.226 
"abort": false, 00:13:32.226 "seek_hole": false, 00:13:32.226 "seek_data": false, 00:13:32.226 "copy": false, 00:13:32.226 "nvme_iov_md": false 00:13:32.226 }, 00:13:32.226 "memory_domains": [ 00:13:32.226 { 00:13:32.226 "dma_device_id": "system", 00:13:32.226 "dma_device_type": 1 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.226 "dma_device_type": 2 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "dma_device_id": "system", 00:13:32.226 "dma_device_type": 1 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.226 "dma_device_type": 2 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "dma_device_id": "system", 00:13:32.226 "dma_device_type": 1 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.226 "dma_device_type": 2 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "dma_device_id": "system", 00:13:32.226 "dma_device_type": 1 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.226 "dma_device_type": 2 00:13:32.226 } 00:13:32.226 ], 00:13:32.226 "driver_specific": { 00:13:32.226 "raid": { 00:13:32.226 "uuid": "680d98aa-c27b-4df3-9b5c-ec39fc25cfee", 00:13:32.226 "strip_size_kb": 0, 00:13:32.226 "state": "online", 00:13:32.226 "raid_level": "raid1", 00:13:32.226 "superblock": true, 00:13:32.226 "num_base_bdevs": 4, 00:13:32.226 "num_base_bdevs_discovered": 4, 00:13:32.226 "num_base_bdevs_operational": 4, 00:13:32.226 "base_bdevs_list": [ 00:13:32.226 { 00:13:32.226 "name": "NewBaseBdev", 00:13:32.226 "uuid": "b974f411-5a25-4308-8acd-d8268620d25d", 00:13:32.226 "is_configured": true, 00:13:32.226 "data_offset": 2048, 00:13:32.226 "data_size": 63488 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "name": "BaseBdev2", 00:13:32.226 "uuid": "fed9332b-653e-44da-99c8-363d4165b16c", 00:13:32.226 "is_configured": true, 00:13:32.226 "data_offset": 2048, 00:13:32.226 "data_size": 63488 00:13:32.226 }, 00:13:32.226 { 
00:13:32.226 "name": "BaseBdev3", 00:13:32.226 "uuid": "296aba22-da76-4929-b720-529c782fb023", 00:13:32.226 "is_configured": true, 00:13:32.226 "data_offset": 2048, 00:13:32.226 "data_size": 63488 00:13:32.226 }, 00:13:32.226 { 00:13:32.226 "name": "BaseBdev4", 00:13:32.226 "uuid": "f4d88c9d-36a4-45e7-be02-e59efb46d70b", 00:13:32.226 "is_configured": true, 00:13:32.226 "data_offset": 2048, 00:13:32.226 "data_size": 63488 00:13:32.226 } 00:13:32.226 ] 00:13:32.226 } 00:13:32.226 } 00:13:32.226 }' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:32.226 BaseBdev2 00:13:32.226 BaseBdev3 00:13:32.226 BaseBdev4' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.226 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.486 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.486 [2024-10-11 09:47:17.008093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:32.486 [2024-10-11 09:47:17.008142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.486 [2024-10-11 09:47:17.008254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.486 [2024-10-11 09:47:17.008569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.486 [2024-10-11 09:47:17.008593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74361 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74361 ']' 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74361 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74361 00:13:32.486 killing process with pid 74361 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74361' 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74361 00:13:32.486 [2024-10-11 09:47:17.058193] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.486 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74361 00:13:33.054 [2024-10-11 09:47:17.454937] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.989 ************************************ 00:13:33.989 END TEST raid_state_function_test_sb 00:13:33.989 ************************************ 00:13:33.989 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:33.989 00:13:33.989 real 0m12.072s 
00:13:33.989 user 0m19.060s 00:13:33.989 sys 0m2.300s 00:13:33.989 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.989 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.249 09:47:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:34.249 09:47:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:34.249 09:47:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.249 09:47:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.249 ************************************ 00:13:34.249 START TEST raid_superblock_test 00:13:34.249 ************************************ 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:34.249 09:47:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75037 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75037 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75037 ']' 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.249 09:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.249 [2024-10-11 09:47:18.765468] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:13:34.249 [2024-10-11 09:47:18.766234] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75037 ] 00:13:34.508 [2024-10-11 09:47:18.939026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.508 [2024-10-11 09:47:19.065001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.766 [2024-10-11 09:47:19.289828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.766 [2024-10-11 09:47:19.289873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:35.340 
09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.340 malloc1 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.340 [2024-10-11 09:47:19.748163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:35.340 [2024-10-11 09:47:19.748243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.340 [2024-10-11 09:47:19.748270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:35.340 [2024-10-11 09:47:19.748280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.340 [2024-10-11 09:47:19.750343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.340 [2024-10-11 09:47:19.750379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:35.340 pt1 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.340 malloc2 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.340 [2024-10-11 09:47:19.807139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.340 [2024-10-11 09:47:19.807201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.340 [2024-10-11 09:47:19.807223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.340 [2024-10-11 09:47:19.807233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.340 [2024-10-11 09:47:19.809648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.340 [2024-10-11 09:47:19.809686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.340 
pt2 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:35.340 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.341 malloc3 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.341 [2024-10-11 09:47:19.877289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:35.341 [2024-10-11 09:47:19.877352] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.341 [2024-10-11 09:47:19.877375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:35.341 [2024-10-11 09:47:19.877385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.341 [2024-10-11 09:47:19.879751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.341 [2024-10-11 09:47:19.879785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:35.341 pt3 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.341 malloc4 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.341 [2024-10-11 09:47:19.937067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:35.341 [2024-10-11 09:47:19.937138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.341 [2024-10-11 09:47:19.937159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:35.341 [2024-10-11 09:47:19.937168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.341 [2024-10-11 09:47:19.939694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.341 [2024-10-11 09:47:19.939728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:35.341 pt4 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.341 [2024-10-11 09:47:19.949094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:35.341 [2024-10-11 09:47:19.951313] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.341 [2024-10-11 09:47:19.951379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:35.341 [2024-10-11 09:47:19.951419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:35.341 [2024-10-11 09:47:19.951594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:35.341 [2024-10-11 09:47:19.951622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.341 [2024-10-11 09:47:19.951960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:35.341 [2024-10-11 09:47:19.952147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:35.341 [2024-10-11 09:47:19.952175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:35.341 [2024-10-11 09:47:19.952318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.341 
09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.341 09:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.617 09:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.617 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.617 "name": "raid_bdev1", 00:13:35.617 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:35.617 "strip_size_kb": 0, 00:13:35.617 "state": "online", 00:13:35.617 "raid_level": "raid1", 00:13:35.617 "superblock": true, 00:13:35.617 "num_base_bdevs": 4, 00:13:35.617 "num_base_bdevs_discovered": 4, 00:13:35.617 "num_base_bdevs_operational": 4, 00:13:35.617 "base_bdevs_list": [ 00:13:35.617 { 00:13:35.617 "name": "pt1", 00:13:35.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.617 "is_configured": true, 00:13:35.617 "data_offset": 2048, 00:13:35.617 "data_size": 63488 00:13:35.617 }, 00:13:35.617 { 00:13:35.617 "name": "pt2", 00:13:35.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.617 "is_configured": true, 00:13:35.617 "data_offset": 2048, 00:13:35.617 "data_size": 63488 00:13:35.617 }, 00:13:35.617 { 00:13:35.617 "name": "pt3", 00:13:35.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.617 "is_configured": true, 00:13:35.617 "data_offset": 2048, 00:13:35.617 "data_size": 63488 
00:13:35.617 }, 00:13:35.617 { 00:13:35.617 "name": "pt4", 00:13:35.617 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.617 "is_configured": true, 00:13:35.617 "data_offset": 2048, 00:13:35.617 "data_size": 63488 00:13:35.617 } 00:13:35.617 ] 00:13:35.617 }' 00:13:35.617 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.617 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.876 [2024-10-11 09:47:20.412669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.876 "name": "raid_bdev1", 00:13:35.876 "aliases": [ 00:13:35.876 "9515bf2e-5325-4f0c-95c2-addc431af9a3" 00:13:35.876 ], 
00:13:35.876 "product_name": "Raid Volume", 00:13:35.876 "block_size": 512, 00:13:35.876 "num_blocks": 63488, 00:13:35.876 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:35.876 "assigned_rate_limits": { 00:13:35.876 "rw_ios_per_sec": 0, 00:13:35.876 "rw_mbytes_per_sec": 0, 00:13:35.876 "r_mbytes_per_sec": 0, 00:13:35.876 "w_mbytes_per_sec": 0 00:13:35.876 }, 00:13:35.876 "claimed": false, 00:13:35.876 "zoned": false, 00:13:35.876 "supported_io_types": { 00:13:35.876 "read": true, 00:13:35.876 "write": true, 00:13:35.876 "unmap": false, 00:13:35.876 "flush": false, 00:13:35.876 "reset": true, 00:13:35.876 "nvme_admin": false, 00:13:35.876 "nvme_io": false, 00:13:35.876 "nvme_io_md": false, 00:13:35.876 "write_zeroes": true, 00:13:35.876 "zcopy": false, 00:13:35.876 "get_zone_info": false, 00:13:35.876 "zone_management": false, 00:13:35.876 "zone_append": false, 00:13:35.876 "compare": false, 00:13:35.876 "compare_and_write": false, 00:13:35.876 "abort": false, 00:13:35.876 "seek_hole": false, 00:13:35.876 "seek_data": false, 00:13:35.876 "copy": false, 00:13:35.876 "nvme_iov_md": false 00:13:35.876 }, 00:13:35.876 "memory_domains": [ 00:13:35.876 { 00:13:35.876 "dma_device_id": "system", 00:13:35.876 "dma_device_type": 1 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.876 "dma_device_type": 2 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "dma_device_id": "system", 00:13:35.876 "dma_device_type": 1 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.876 "dma_device_type": 2 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "dma_device_id": "system", 00:13:35.876 "dma_device_type": 1 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.876 "dma_device_type": 2 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "dma_device_id": "system", 00:13:35.876 "dma_device_type": 1 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:35.876 "dma_device_type": 2 00:13:35.876 } 00:13:35.876 ], 00:13:35.876 "driver_specific": { 00:13:35.876 "raid": { 00:13:35.876 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:35.876 "strip_size_kb": 0, 00:13:35.876 "state": "online", 00:13:35.876 "raid_level": "raid1", 00:13:35.876 "superblock": true, 00:13:35.876 "num_base_bdevs": 4, 00:13:35.876 "num_base_bdevs_discovered": 4, 00:13:35.876 "num_base_bdevs_operational": 4, 00:13:35.876 "base_bdevs_list": [ 00:13:35.876 { 00:13:35.876 "name": "pt1", 00:13:35.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.876 "is_configured": true, 00:13:35.876 "data_offset": 2048, 00:13:35.876 "data_size": 63488 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "name": "pt2", 00:13:35.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.876 "is_configured": true, 00:13:35.876 "data_offset": 2048, 00:13:35.876 "data_size": 63488 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "name": "pt3", 00:13:35.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.876 "is_configured": true, 00:13:35.876 "data_offset": 2048, 00:13:35.876 "data_size": 63488 00:13:35.876 }, 00:13:35.876 { 00:13:35.876 "name": "pt4", 00:13:35.876 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.876 "is_configured": true, 00:13:35.876 "data_offset": 2048, 00:13:35.876 "data_size": 63488 00:13:35.876 } 00:13:35.876 ] 00:13:35.876 } 00:13:35.876 } 00:13:35.876 }' 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:35.876 pt2 00:13:35.876 pt3 00:13:35.876 pt4' 00:13:35.876 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.135 09:47:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.135 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.136 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.136 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:36.136 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:36.136 [2024-10-11 09:47:20.752231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9515bf2e-5325-4f0c-95c2-addc431af9a3 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9515bf2e-5325-4f0c-95c2-addc431af9a3 ']' 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 [2024-10-11 09:47:20.799900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.395 [2024-10-11 09:47:20.800044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.395 [2024-10-11 09:47:20.800196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.395 [2024-10-11 09:47:20.800327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.395 [2024-10-11 09:47:20.800389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.395 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.395 [2024-10-11 09:47:20.955655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:36.395 [2024-10-11 09:47:20.958248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:36.395 [2024-10-11 09:47:20.958365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:36.395 [2024-10-11 09:47:20.958463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:36.395 [2024-10-11 09:47:20.958549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:36.395 [2024-10-11 09:47:20.958642] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:36.395 [2024-10-11 09:47:20.958699] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:36.395 [2024-10-11 09:47:20.958782] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:36.395 [2024-10-11 09:47:20.958846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.395 [2024-10-11 09:47:20.958880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:13:36.395 request: 00:13:36.395 { 00:13:36.395 "name": "raid_bdev1", 00:13:36.395 "raid_level": "raid1", 00:13:36.395 "base_bdevs": [ 00:13:36.395 "malloc1", 00:13:36.395 "malloc2", 00:13:36.395 "malloc3", 00:13:36.395 "malloc4" 00:13:36.395 ], 00:13:36.395 "superblock": false, 00:13:36.395 "method": "bdev_raid_create", 00:13:36.396 "req_id": 1 00:13:36.396 } 00:13:36.396 Got JSON-RPC error response 00:13:36.396 response: 00:13:36.396 { 00:13:36.396 "code": -17, 00:13:36.396 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:36.396 } 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:36.396 09:47:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.396 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.396 [2024-10-11 09:47:21.007591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:36.396 [2024-10-11 09:47:21.007778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.396 [2024-10-11 09:47:21.007820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:36.396 [2024-10-11 09:47:21.007867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.396 [2024-10-11 09:47:21.010381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.396 [2024-10-11 09:47:21.010462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:36.396 [2024-10-11 09:47:21.010570] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:36.396 [2024-10-11 09:47:21.010655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:36.396 pt1 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.396 09:47:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.396 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.655 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.655 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.655 "name": "raid_bdev1", 00:13:36.655 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:36.655 "strip_size_kb": 0, 00:13:36.655 "state": "configuring", 00:13:36.655 "raid_level": "raid1", 00:13:36.655 "superblock": true, 00:13:36.655 "num_base_bdevs": 4, 00:13:36.655 "num_base_bdevs_discovered": 1, 00:13:36.655 "num_base_bdevs_operational": 4, 00:13:36.655 "base_bdevs_list": [ 00:13:36.655 { 00:13:36.655 "name": "pt1", 00:13:36.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.655 "is_configured": true, 00:13:36.655 "data_offset": 2048, 00:13:36.655 "data_size": 63488 00:13:36.655 }, 00:13:36.655 { 00:13:36.655 "name": null, 00:13:36.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.655 "is_configured": false, 00:13:36.655 "data_offset": 2048, 00:13:36.655 "data_size": 63488 00:13:36.655 }, 00:13:36.655 { 00:13:36.655 "name": null, 00:13:36.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.655 
"is_configured": false, 00:13:36.655 "data_offset": 2048, 00:13:36.655 "data_size": 63488 00:13:36.655 }, 00:13:36.655 { 00:13:36.655 "name": null, 00:13:36.655 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.655 "is_configured": false, 00:13:36.655 "data_offset": 2048, 00:13:36.655 "data_size": 63488 00:13:36.655 } 00:13:36.655 ] 00:13:36.655 }' 00:13:36.655 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.655 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.913 [2024-10-11 09:47:21.414842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.913 [2024-10-11 09:47:21.415010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.913 [2024-10-11 09:47:21.415051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:36.913 [2024-10-11 09:47:21.415084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.913 [2024-10-11 09:47:21.415624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.913 [2024-10-11 09:47:21.415695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.913 [2024-10-11 09:47:21.415860] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.913 [2024-10-11 09:47:21.415945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:36.913 pt2 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.913 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 [2024-10-11 09:47:21.426904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.914 09:47:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.914 "name": "raid_bdev1", 00:13:36.914 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:36.914 "strip_size_kb": 0, 00:13:36.914 "state": "configuring", 00:13:36.914 "raid_level": "raid1", 00:13:36.914 "superblock": true, 00:13:36.914 "num_base_bdevs": 4, 00:13:36.914 "num_base_bdevs_discovered": 1, 00:13:36.914 "num_base_bdevs_operational": 4, 00:13:36.914 "base_bdevs_list": [ 00:13:36.914 { 00:13:36.914 "name": "pt1", 00:13:36.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.914 "is_configured": true, 00:13:36.914 "data_offset": 2048, 00:13:36.914 "data_size": 63488 00:13:36.914 }, 00:13:36.914 { 00:13:36.914 "name": null, 00:13:36.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.914 "is_configured": false, 00:13:36.914 "data_offset": 0, 00:13:36.914 "data_size": 63488 00:13:36.914 }, 00:13:36.914 { 00:13:36.914 "name": null, 00:13:36.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.914 "is_configured": false, 00:13:36.914 "data_offset": 2048, 00:13:36.914 "data_size": 63488 00:13:36.914 }, 00:13:36.914 { 00:13:36.914 "name": null, 00:13:36.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.914 "is_configured": false, 00:13:36.914 "data_offset": 2048, 00:13:36.914 "data_size": 63488 00:13:36.914 } 00:13:36.914 ] 00:13:36.914 }' 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.914 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 [2024-10-11 09:47:21.890057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.482 [2024-10-11 09:47:21.890217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.482 [2024-10-11 09:47:21.890268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:37.482 [2024-10-11 09:47:21.890303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.482 [2024-10-11 09:47:21.890828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.482 [2024-10-11 09:47:21.890888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.482 [2024-10-11 09:47:21.891013] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:37.482 [2024-10-11 09:47:21.891070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.482 pt2 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:37.482 09:47:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 [2024-10-11 09:47:21.902013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:37.482 [2024-10-11 09:47:21.902125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.482 [2024-10-11 09:47:21.902165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:37.482 [2024-10-11 09:47:21.902194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.482 [2024-10-11 09:47:21.902614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.482 [2024-10-11 09:47:21.902668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:37.482 [2024-10-11 09:47:21.902774] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:37.482 [2024-10-11 09:47:21.902821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.482 pt3 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 [2024-10-11 09:47:21.913930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:37.482 [2024-10-11 
09:47:21.914014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.482 [2024-10-11 09:47:21.914044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:37.482 [2024-10-11 09:47:21.914069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.482 [2024-10-11 09:47:21.914448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.482 [2024-10-11 09:47:21.914502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:37.482 [2024-10-11 09:47:21.914584] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:37.482 [2024-10-11 09:47:21.914626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:37.482 [2024-10-11 09:47:21.914793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:37.482 [2024-10-11 09:47:21.914832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:37.482 [2024-10-11 09:47:21.915125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:37.482 [2024-10-11 09:47:21.915319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:37.482 [2024-10-11 09:47:21.915360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:37.482 [2024-10-11 09:47:21.915516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.482 pt4 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.482 "name": "raid_bdev1", 00:13:37.482 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:37.482 "strip_size_kb": 0, 00:13:37.482 "state": "online", 00:13:37.482 "raid_level": "raid1", 00:13:37.482 "superblock": true, 00:13:37.482 "num_base_bdevs": 4, 00:13:37.482 
"num_base_bdevs_discovered": 4, 00:13:37.482 "num_base_bdevs_operational": 4, 00:13:37.482 "base_bdevs_list": [ 00:13:37.482 { 00:13:37.482 "name": "pt1", 00:13:37.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.482 "is_configured": true, 00:13:37.482 "data_offset": 2048, 00:13:37.482 "data_size": 63488 00:13:37.482 }, 00:13:37.482 { 00:13:37.482 "name": "pt2", 00:13:37.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.482 "is_configured": true, 00:13:37.482 "data_offset": 2048, 00:13:37.482 "data_size": 63488 00:13:37.482 }, 00:13:37.482 { 00:13:37.482 "name": "pt3", 00:13:37.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.482 "is_configured": true, 00:13:37.482 "data_offset": 2048, 00:13:37.482 "data_size": 63488 00:13:37.482 }, 00:13:37.482 { 00:13:37.482 "name": "pt4", 00:13:37.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.482 "is_configured": true, 00:13:37.482 "data_offset": 2048, 00:13:37.482 "data_size": 63488 00:13:37.482 } 00:13:37.482 ] 00:13:37.482 }' 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.482 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.882 [2024-10-11 09:47:22.401572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.882 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.882 "name": "raid_bdev1", 00:13:37.882 "aliases": [ 00:13:37.882 "9515bf2e-5325-4f0c-95c2-addc431af9a3" 00:13:37.882 ], 00:13:37.882 "product_name": "Raid Volume", 00:13:37.882 "block_size": 512, 00:13:37.882 "num_blocks": 63488, 00:13:37.882 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:37.882 "assigned_rate_limits": { 00:13:37.882 "rw_ios_per_sec": 0, 00:13:37.882 "rw_mbytes_per_sec": 0, 00:13:37.882 "r_mbytes_per_sec": 0, 00:13:37.882 "w_mbytes_per_sec": 0 00:13:37.882 }, 00:13:37.882 "claimed": false, 00:13:37.882 "zoned": false, 00:13:37.882 "supported_io_types": { 00:13:37.882 "read": true, 00:13:37.882 "write": true, 00:13:37.882 "unmap": false, 00:13:37.882 "flush": false, 00:13:37.882 "reset": true, 00:13:37.882 "nvme_admin": false, 00:13:37.882 "nvme_io": false, 00:13:37.882 "nvme_io_md": false, 00:13:37.882 "write_zeroes": true, 00:13:37.882 "zcopy": false, 00:13:37.882 "get_zone_info": false, 00:13:37.882 "zone_management": false, 00:13:37.882 "zone_append": false, 00:13:37.882 "compare": false, 00:13:37.882 "compare_and_write": false, 00:13:37.882 "abort": false, 00:13:37.882 "seek_hole": false, 00:13:37.882 "seek_data": false, 00:13:37.882 "copy": false, 00:13:37.882 "nvme_iov_md": false 00:13:37.882 }, 00:13:37.882 "memory_domains": [ 00:13:37.882 { 00:13:37.882 "dma_device_id": "system", 00:13:37.882 
"dma_device_type": 1 00:13:37.882 }, 00:13:37.882 { 00:13:37.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.882 "dma_device_type": 2 00:13:37.882 }, 00:13:37.882 { 00:13:37.882 "dma_device_id": "system", 00:13:37.882 "dma_device_type": 1 00:13:37.882 }, 00:13:37.882 { 00:13:37.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.882 "dma_device_type": 2 00:13:37.882 }, 00:13:37.882 { 00:13:37.882 "dma_device_id": "system", 00:13:37.882 "dma_device_type": 1 00:13:37.882 }, 00:13:37.882 { 00:13:37.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.882 "dma_device_type": 2 00:13:37.882 }, 00:13:37.882 { 00:13:37.882 "dma_device_id": "system", 00:13:37.882 "dma_device_type": 1 00:13:37.882 }, 00:13:37.882 { 00:13:37.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.882 "dma_device_type": 2 00:13:37.882 } 00:13:37.882 ], 00:13:37.883 "driver_specific": { 00:13:37.883 "raid": { 00:13:37.883 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:37.883 "strip_size_kb": 0, 00:13:37.883 "state": "online", 00:13:37.883 "raid_level": "raid1", 00:13:37.883 "superblock": true, 00:13:37.883 "num_base_bdevs": 4, 00:13:37.883 "num_base_bdevs_discovered": 4, 00:13:37.883 "num_base_bdevs_operational": 4, 00:13:37.883 "base_bdevs_list": [ 00:13:37.883 { 00:13:37.883 "name": "pt1", 00:13:37.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.883 "is_configured": true, 00:13:37.883 "data_offset": 2048, 00:13:37.883 "data_size": 63488 00:13:37.883 }, 00:13:37.883 { 00:13:37.883 "name": "pt2", 00:13:37.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.883 "is_configured": true, 00:13:37.883 "data_offset": 2048, 00:13:37.883 "data_size": 63488 00:13:37.883 }, 00:13:37.883 { 00:13:37.883 "name": "pt3", 00:13:37.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.883 "is_configured": true, 00:13:37.883 "data_offset": 2048, 00:13:37.883 "data_size": 63488 00:13:37.883 }, 00:13:37.883 { 00:13:37.883 "name": "pt4", 00:13:37.883 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:37.883 "is_configured": true, 00:13:37.883 "data_offset": 2048, 00:13:37.883 "data_size": 63488 00:13:37.883 } 00:13:37.883 ] 00:13:37.883 } 00:13:37.883 } 00:13:37.883 }' 00:13:37.883 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.883 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:37.883 pt2 00:13:37.883 pt3 00:13:37.883 pt4' 00:13:37.883 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.141 09:47:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:38.141 [2024-10-11 09:47:22.744970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.141 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9515bf2e-5325-4f0c-95c2-addc431af9a3 '!=' 9515bf2e-5325-4f0c-95c2-addc431af9a3 ']' 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.400 [2024-10-11 09:47:22.792621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:38.400 
09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.400 "name": "raid_bdev1", 00:13:38.400 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:38.400 "strip_size_kb": 0, 00:13:38.400 "state": 
"online", 00:13:38.400 "raid_level": "raid1", 00:13:38.400 "superblock": true, 00:13:38.400 "num_base_bdevs": 4, 00:13:38.400 "num_base_bdevs_discovered": 3, 00:13:38.400 "num_base_bdevs_operational": 3, 00:13:38.400 "base_bdevs_list": [ 00:13:38.400 { 00:13:38.400 "name": null, 00:13:38.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.400 "is_configured": false, 00:13:38.400 "data_offset": 0, 00:13:38.400 "data_size": 63488 00:13:38.400 }, 00:13:38.400 { 00:13:38.400 "name": "pt2", 00:13:38.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.400 "is_configured": true, 00:13:38.400 "data_offset": 2048, 00:13:38.400 "data_size": 63488 00:13:38.400 }, 00:13:38.400 { 00:13:38.400 "name": "pt3", 00:13:38.400 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.400 "is_configured": true, 00:13:38.400 "data_offset": 2048, 00:13:38.400 "data_size": 63488 00:13:38.400 }, 00:13:38.400 { 00:13:38.400 "name": "pt4", 00:13:38.400 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.400 "is_configured": true, 00:13:38.400 "data_offset": 2048, 00:13:38.400 "data_size": 63488 00:13:38.400 } 00:13:38.400 ] 00:13:38.400 }' 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.400 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.660 [2024-10-11 09:47:23.259841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.660 [2024-10-11 09:47:23.259974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.660 [2024-10-11 09:47:23.260093] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.660 [2024-10-11 09:47:23.260200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.660 [2024-10-11 09:47:23.260251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:38.660 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.920 [2024-10-11 09:47:23.359635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.920 [2024-10-11 
09:47:23.359846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.920 [2024-10-11 09:47:23.359906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:38.920 [2024-10-11 09:47:23.359942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.920 [2024-10-11 09:47:23.362682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.920 [2024-10-11 09:47:23.362796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.920 [2024-10-11 09:47:23.362937] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.920 [2024-10-11 09:47:23.363029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.920 pt2 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.920 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.921 09:47:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.921 "name": "raid_bdev1", 00:13:38.921 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:38.921 "strip_size_kb": 0, 00:13:38.921 "state": "configuring", 00:13:38.921 "raid_level": "raid1", 00:13:38.921 "superblock": true, 00:13:38.921 "num_base_bdevs": 4, 00:13:38.921 "num_base_bdevs_discovered": 1, 00:13:38.921 "num_base_bdevs_operational": 3, 00:13:38.921 "base_bdevs_list": [ 00:13:38.921 { 00:13:38.921 "name": null, 00:13:38.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.921 "is_configured": false, 00:13:38.921 "data_offset": 2048, 00:13:38.921 "data_size": 63488 00:13:38.921 }, 00:13:38.921 { 00:13:38.921 "name": "pt2", 00:13:38.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.921 "is_configured": true, 00:13:38.921 "data_offset": 2048, 00:13:38.921 "data_size": 63488 00:13:38.921 }, 00:13:38.921 { 00:13:38.921 "name": null, 00:13:38.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.921 "is_configured": false, 00:13:38.921 "data_offset": 2048, 00:13:38.921 "data_size": 63488 00:13:38.921 }, 00:13:38.921 { 00:13:38.921 "name": null, 00:13:38.921 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.921 "is_configured": false, 00:13:38.921 "data_offset": 2048, 00:13:38.921 "data_size": 63488 00:13:38.921 
} 00:13:38.921 ] 00:13:38.921 }' 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.921 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.181 [2024-10-11 09:47:23.802934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:39.181 [2024-10-11 09:47:23.803116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.181 [2024-10-11 09:47:23.803163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:39.181 [2024-10-11 09:47:23.803196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.181 [2024-10-11 09:47:23.803754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.181 [2024-10-11 09:47:23.803818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:39.181 [2024-10-11 09:47:23.803946] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:39.181 [2024-10-11 09:47:23.803992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:39.181 pt3 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.181 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.441 "name": "raid_bdev1", 00:13:39.441 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:39.441 "strip_size_kb": 0, 00:13:39.441 "state": "configuring", 00:13:39.441 "raid_level": "raid1", 00:13:39.441 "superblock": true, 00:13:39.441 "num_base_bdevs": 4, 00:13:39.441 "num_base_bdevs_discovered": 2, 
00:13:39.441 "num_base_bdevs_operational": 3, 00:13:39.441 "base_bdevs_list": [ 00:13:39.441 { 00:13:39.441 "name": null, 00:13:39.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.441 "is_configured": false, 00:13:39.441 "data_offset": 2048, 00:13:39.441 "data_size": 63488 00:13:39.441 }, 00:13:39.441 { 00:13:39.441 "name": "pt2", 00:13:39.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.441 "is_configured": true, 00:13:39.441 "data_offset": 2048, 00:13:39.441 "data_size": 63488 00:13:39.441 }, 00:13:39.441 { 00:13:39.441 "name": "pt3", 00:13:39.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.441 "is_configured": true, 00:13:39.441 "data_offset": 2048, 00:13:39.441 "data_size": 63488 00:13:39.441 }, 00:13:39.441 { 00:13:39.441 "name": null, 00:13:39.441 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.441 "is_configured": false, 00:13:39.441 "data_offset": 2048, 00:13:39.441 "data_size": 63488 00:13:39.441 } 00:13:39.441 ] 00:13:39.441 }' 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.441 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.701 [2024-10-11 09:47:24.238189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:39.701 [2024-10-11 
09:47:24.238385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.701 [2024-10-11 09:47:24.238433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:39.701 [2024-10-11 09:47:24.238467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.701 [2024-10-11 09:47:24.238961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.701 [2024-10-11 09:47:24.239021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:39.701 [2024-10-11 09:47:24.239140] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:39.701 [2024-10-11 09:47:24.239197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:39.701 [2024-10-11 09:47:24.239373] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:39.701 [2024-10-11 09:47:24.239410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:39.701 [2024-10-11 09:47:24.239723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:39.701 [2024-10-11 09:47:24.239958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:39.701 [2024-10-11 09:47:24.240010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:39.701 [2024-10-11 09:47:24.240209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.701 pt4 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.701 09:47:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.701 "name": "raid_bdev1", 00:13:39.701 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:39.701 "strip_size_kb": 0, 00:13:39.701 "state": "online", 00:13:39.701 "raid_level": "raid1", 00:13:39.701 "superblock": true, 00:13:39.701 "num_base_bdevs": 4, 00:13:39.701 "num_base_bdevs_discovered": 3, 00:13:39.701 "num_base_bdevs_operational": 3, 00:13:39.701 "base_bdevs_list": [ 00:13:39.701 { 00:13:39.701 "name": null, 00:13:39.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.701 
"is_configured": false, 00:13:39.701 "data_offset": 2048, 00:13:39.701 "data_size": 63488 00:13:39.701 }, 00:13:39.701 { 00:13:39.701 "name": "pt2", 00:13:39.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.701 "is_configured": true, 00:13:39.701 "data_offset": 2048, 00:13:39.701 "data_size": 63488 00:13:39.701 }, 00:13:39.701 { 00:13:39.701 "name": "pt3", 00:13:39.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.701 "is_configured": true, 00:13:39.701 "data_offset": 2048, 00:13:39.701 "data_size": 63488 00:13:39.701 }, 00:13:39.701 { 00:13:39.701 "name": "pt4", 00:13:39.701 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.701 "is_configured": true, 00:13:39.701 "data_offset": 2048, 00:13:39.701 "data_size": 63488 00:13:39.701 } 00:13:39.701 ] 00:13:39.701 }' 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.701 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.270 [2024-10-11 09:47:24.729287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.270 [2024-10-11 09:47:24.729403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.270 [2024-10-11 09:47:24.729512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.270 [2024-10-11 09:47:24.729603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.270 [2024-10-11 09:47:24.729650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.270 [2024-10-11 09:47:24.785196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:40.270 [2024-10-11 09:47:24.785351] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:40.270 [2024-10-11 09:47:24.785396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:40.270 [2024-10-11 09:47:24.785442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.270 [2024-10-11 09:47:24.787962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.270 [2024-10-11 09:47:24.788056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:40.270 [2024-10-11 09:47:24.788190] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:40.270 [2024-10-11 09:47:24.788274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:40.270 [2024-10-11 09:47:24.788452] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:40.270 [2024-10-11 09:47:24.788510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.270 [2024-10-11 09:47:24.788551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:40.270 [2024-10-11 09:47:24.788647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.270 [2024-10-11 09:47:24.788797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:40.270 pt1 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.270 "name": "raid_bdev1", 00:13:40.270 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:40.270 "strip_size_kb": 0, 00:13:40.270 "state": "configuring", 00:13:40.270 "raid_level": "raid1", 00:13:40.270 "superblock": true, 00:13:40.270 "num_base_bdevs": 4, 00:13:40.270 "num_base_bdevs_discovered": 2, 00:13:40.270 "num_base_bdevs_operational": 3, 00:13:40.270 "base_bdevs_list": [ 00:13:40.270 { 00:13:40.270 "name": null, 00:13:40.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.270 "is_configured": false, 00:13:40.270 
"data_offset": 2048, 00:13:40.270 "data_size": 63488 00:13:40.270 }, 00:13:40.270 { 00:13:40.270 "name": "pt2", 00:13:40.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.270 "is_configured": true, 00:13:40.270 "data_offset": 2048, 00:13:40.270 "data_size": 63488 00:13:40.270 }, 00:13:40.270 { 00:13:40.270 "name": "pt3", 00:13:40.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.270 "is_configured": true, 00:13:40.270 "data_offset": 2048, 00:13:40.270 "data_size": 63488 00:13:40.270 }, 00:13:40.270 { 00:13:40.270 "name": null, 00:13:40.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.270 "is_configured": false, 00:13:40.270 "data_offset": 2048, 00:13:40.270 "data_size": 63488 00:13:40.270 } 00:13:40.270 ] 00:13:40.270 }' 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.270 09:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.839 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:40.839 [2024-10-11 09:47:25.276421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:40.839 [2024-10-11 09:47:25.276589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.839 [2024-10-11 09:47:25.276637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:40.839 [2024-10-11 09:47:25.276670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.839 [2024-10-11 09:47:25.277203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.839 [2024-10-11 09:47:25.277263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:40.839 [2024-10-11 09:47:25.277400] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:40.839 [2024-10-11 09:47:25.277463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:40.839 [2024-10-11 09:47:25.277613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:40.840 [2024-10-11 09:47:25.277650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:40.840 [2024-10-11 09:47:25.277926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:40.840 [2024-10-11 09:47:25.278100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:40.840 [2024-10-11 09:47:25.278142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:40.840 [2024-10-11 09:47:25.278331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.840 pt4 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.840 "name": "raid_bdev1", 00:13:40.840 "uuid": "9515bf2e-5325-4f0c-95c2-addc431af9a3", 00:13:40.840 "strip_size_kb": 0, 00:13:40.840 "state": "online", 00:13:40.840 "raid_level": "raid1", 00:13:40.840 "superblock": true, 00:13:40.840 "num_base_bdevs": 4, 00:13:40.840 "num_base_bdevs_discovered": 3, 00:13:40.840 "num_base_bdevs_operational": 3, 00:13:40.840 
"base_bdevs_list": [ 00:13:40.840 { 00:13:40.840 "name": null, 00:13:40.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.840 "is_configured": false, 00:13:40.840 "data_offset": 2048, 00:13:40.840 "data_size": 63488 00:13:40.840 }, 00:13:40.840 { 00:13:40.840 "name": "pt2", 00:13:40.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.840 "is_configured": true, 00:13:40.840 "data_offset": 2048, 00:13:40.840 "data_size": 63488 00:13:40.840 }, 00:13:40.840 { 00:13:40.840 "name": "pt3", 00:13:40.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.840 "is_configured": true, 00:13:40.840 "data_offset": 2048, 00:13:40.840 "data_size": 63488 00:13:40.840 }, 00:13:40.840 { 00:13:40.840 "name": "pt4", 00:13:40.840 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.840 "is_configured": true, 00:13:40.840 "data_offset": 2048, 00:13:40.840 "data_size": 63488 00:13:40.840 } 00:13:40.840 ] 00:13:40.840 }' 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.840 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:41.411 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:41.411 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.411 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.412 [2024-10-11 09:47:25.807811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9515bf2e-5325-4f0c-95c2-addc431af9a3 '!=' 9515bf2e-5325-4f0c-95c2-addc431af9a3 ']' 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75037 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75037 ']' 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75037 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75037 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75037' 00:13:41.412 killing process with pid 75037 00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75037 00:13:41.412 [2024-10-11 09:47:25.890129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.412 [2024-10-11 09:47:25.890239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:41.412 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75037
00:13:41.412 [2024-10-11 09:47:25.890316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:41.412 [2024-10-11 09:47:25.890330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:13:41.672 [2024-10-11 09:47:26.288774] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:43.051 ************************************
00:13:43.051 END TEST raid_superblock_test
************************************
00:13:43.051 09:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:13:43.051
00:13:43.051 real 0m8.765s
00:13:43.051 user 0m13.670s
00:13:43.051 sys 0m1.740s
00:13:43.051 09:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:43.051 09:47:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.051 09:47:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read
00:13:43.051 09:47:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:13:43.051 09:47:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:43.051 09:47:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:43.051 ************************************
00:13:43.051 START TEST raid_read_error_test
************************************
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OTnAudQpTC
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75530
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75530
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75530 ']'
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:43.051 09:47:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.052 [2024-10-11 09:47:27.619530] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:13:43.052 [2024-10-11 09:47:27.619842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75530 ]
00:13:43.311 [2024-10-11 09:47:27.793841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:43.311 [2024-10-11 09:47:27.932722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:43.570 [2024-10-11 09:47:28.175252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:43.570 [2024-10-11 09:47:28.175378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.139 BaseBdev1_malloc
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.139 true
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.139 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.139 [2024-10-11 09:47:28.601026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:44.139 [2024-10-11 09:47:28.601190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.139 [2024-10-11 09:47:28.601232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:44.140 [2024-10-11 09:47:28.601266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.140 [2024-10-11 09:47:28.603395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.140 [2024-10-11 09:47:28.603472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:44.140 BaseBdev1 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 BaseBdev2_malloc 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 true 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 [2024-10-11 09:47:28.673284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:44.140 [2024-10-11 09:47:28.673424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.140 [2024-10-11 09:47:28.673460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:44.140 [2024-10-11 09:47:28.673493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.140 [2024-10-11 09:47:28.675622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.140 [2024-10-11 09:47:28.675718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:44.140 BaseBdev2 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 BaseBdev3_malloc 00:13:44.140 09:47:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 true 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.140 [2024-10-11 09:47:28.757477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:44.140 [2024-10-11 09:47:28.757629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.140 [2024-10-11 09:47:28.757666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:44.140 [2024-10-11 09:47:28.757698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.140 [2024-10-11 09:47:28.759875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.140 [2024-10-11 09:47:28.759958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:44.140 BaseBdev3 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.140 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.400 BaseBdev4_malloc 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.400 true 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.400 [2024-10-11 09:47:28.832709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:44.400 [2024-10-11 09:47:28.832797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.400 [2024-10-11 09:47:28.832822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:44.400 [2024-10-11 09:47:28.832839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.400 [2024-10-11 09:47:28.835452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.400 [2024-10-11 09:47:28.835498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:44.400 BaseBdev4 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.400 [2024-10-11 09:47:28.844752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.400 [2024-10-11 09:47:28.847227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.400 [2024-10-11 09:47:28.847361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.400 [2024-10-11 09:47:28.847452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:44.400 [2024-10-11 09:47:28.847735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:44.400 [2024-10-11 09:47:28.847812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.400 [2024-10-11 09:47:28.848154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:44.400 [2024-10-11 09:47:28.848397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:44.400 [2024-10-11 09:47:28.848441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:44.400 [2024-10-11 09:47:28.848705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:44.400 09:47:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.400 "name": "raid_bdev1", 00:13:44.400 "uuid": "028766f6-4a61-4a7b-b728-546c396e9e7c", 00:13:44.400 "strip_size_kb": 0, 00:13:44.400 "state": "online", 00:13:44.400 "raid_level": "raid1", 00:13:44.400 "superblock": true, 00:13:44.400 "num_base_bdevs": 4, 00:13:44.400 "num_base_bdevs_discovered": 4, 00:13:44.400 "num_base_bdevs_operational": 4, 00:13:44.400 "base_bdevs_list": [ 00:13:44.400 { 
00:13:44.400 "name": "BaseBdev1", 00:13:44.400 "uuid": "5ce5f2e1-4a85-56a5-9ddc-3240c3f17446", 00:13:44.400 "is_configured": true, 00:13:44.400 "data_offset": 2048, 00:13:44.400 "data_size": 63488 00:13:44.400 }, 00:13:44.400 { 00:13:44.400 "name": "BaseBdev2", 00:13:44.400 "uuid": "c0a68ad4-da99-58e7-ac01-0381b54c9ac6", 00:13:44.400 "is_configured": true, 00:13:44.400 "data_offset": 2048, 00:13:44.400 "data_size": 63488 00:13:44.400 }, 00:13:44.400 { 00:13:44.400 "name": "BaseBdev3", 00:13:44.400 "uuid": "edcebac3-a50f-5c3d-a22c-af0ffec1aabf", 00:13:44.400 "is_configured": true, 00:13:44.400 "data_offset": 2048, 00:13:44.400 "data_size": 63488 00:13:44.400 }, 00:13:44.400 { 00:13:44.400 "name": "BaseBdev4", 00:13:44.400 "uuid": "79b3e5c6-1a3d-5a47-836a-48616ee6b118", 00:13:44.400 "is_configured": true, 00:13:44.400 "data_offset": 2048, 00:13:44.400 "data_size": 63488 00:13:44.400 } 00:13:44.400 ] 00:13:44.400 }' 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.400 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.969 09:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:44.969 09:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:44.969 [2024-10-11 09:47:29.429782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.907 09:47:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.907 09:47:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.907 "name": "raid_bdev1", 00:13:45.907 "uuid": "028766f6-4a61-4a7b-b728-546c396e9e7c", 00:13:45.907 "strip_size_kb": 0, 00:13:45.907 "state": "online", 00:13:45.907 "raid_level": "raid1", 00:13:45.907 "superblock": true, 00:13:45.907 "num_base_bdevs": 4, 00:13:45.907 "num_base_bdevs_discovered": 4, 00:13:45.907 "num_base_bdevs_operational": 4, 00:13:45.907 "base_bdevs_list": [ 00:13:45.907 { 00:13:45.907 "name": "BaseBdev1", 00:13:45.907 "uuid": "5ce5f2e1-4a85-56a5-9ddc-3240c3f17446", 00:13:45.907 "is_configured": true, 00:13:45.907 "data_offset": 2048, 00:13:45.907 "data_size": 63488 00:13:45.907 }, 00:13:45.907 { 00:13:45.907 "name": "BaseBdev2", 00:13:45.907 "uuid": "c0a68ad4-da99-58e7-ac01-0381b54c9ac6", 00:13:45.907 "is_configured": true, 00:13:45.907 "data_offset": 2048, 00:13:45.907 "data_size": 63488 00:13:45.907 }, 00:13:45.907 { 00:13:45.907 "name": "BaseBdev3", 00:13:45.907 "uuid": "edcebac3-a50f-5c3d-a22c-af0ffec1aabf", 00:13:45.907 "is_configured": true, 00:13:45.907 "data_offset": 2048, 00:13:45.907 "data_size": 63488 00:13:45.907 }, 00:13:45.907 { 00:13:45.907 "name": "BaseBdev4", 00:13:45.907 "uuid": "79b3e5c6-1a3d-5a47-836a-48616ee6b118", 00:13:45.907 "is_configured": true, 00:13:45.907 "data_offset": 2048, 00:13:45.907 "data_size": 63488 00:13:45.907 } 00:13:45.907 ] 00:13:45.907 }' 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.907 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.167 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.167 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.167 09:47:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.167 [2024-10-11 09:47:30.794724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.167 [2024-10-11 09:47:30.794880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.167 [2024-10-11 09:47:30.798029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.167 [2024-10-11 09:47:30.798143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.167 [2024-10-11 09:47:30.798312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.167 [2024-10-11 09:47:30.798367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:46.436 { 00:13:46.436 "results": [ 00:13:46.436 { 00:13:46.436 "job": "raid_bdev1", 00:13:46.436 "core_mask": "0x1", 00:13:46.436 "workload": "randrw", 00:13:46.436 "percentage": 50, 00:13:46.436 "status": "finished", 00:13:46.436 "queue_depth": 1, 00:13:46.436 "io_size": 131072, 00:13:46.436 "runtime": 1.365479, 00:13:46.436 "iops": 9787.041763366555, 00:13:46.436 "mibps": 1223.3802204208193, 00:13:46.436 "io_failed": 0, 00:13:46.436 "io_timeout": 0, 00:13:46.436 "avg_latency_us": 99.20154334985865, 00:13:46.436 "min_latency_us": 25.041048034934498, 00:13:46.436 "max_latency_us": 1631.2454148471616 00:13:46.436 } 00:13:46.436 ], 00:13:46.436 "core_count": 1 00:13:46.436 } 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75530 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75530 ']' 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75530 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75530 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:46.436 killing process with pid 75530 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75530' 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75530 00:13:46.436 [2024-10-11 09:47:30.847064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.436 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75530 00:13:46.711 [2024-10-11 09:47:31.194691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OTnAudQpTC 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:48.090 00:13:48.090 real 0m5.067s 00:13:48.090 user 0m5.994s 00:13:48.090 sys 0m0.626s 
00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.090 09:47:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.090 ************************************ 00:13:48.090 END TEST raid_read_error_test 00:13:48.090 ************************************ 00:13:48.090 09:47:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:48.090 09:47:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:48.090 09:47:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.090 09:47:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.090 ************************************ 00:13:48.090 START TEST raid_write_error_test 00:13:48.090 ************************************ 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LQ5Qd6ZT8i 00:13:48.090 09:47:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75685 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75685 00:13:48.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75685 ']' 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.090 09:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.349 [2024-10-11 09:47:32.770258] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:13:48.349 [2024-10-11 09:47:32.770410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75685 ] 00:13:48.349 [2024-10-11 09:47:32.928605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.608 [2024-10-11 09:47:33.072336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.866 [2024-10-11 09:47:33.333894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.866 [2024-10-11 09:47:33.333951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.124 BaseBdev1_malloc 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.124 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.125 true 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.125 [2024-10-11 09:47:33.731819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:49.125 [2024-10-11 09:47:33.731984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.125 [2024-10-11 09:47:33.732033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:49.125 [2024-10-11 09:47:33.732075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.125 [2024-10-11 09:47:33.734600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.125 [2024-10-11 09:47:33.734705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.125 BaseBdev1 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.125 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 BaseBdev2_malloc 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:49.384 09:47:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 true 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 [2024-10-11 09:47:33.808106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:49.384 [2024-10-11 09:47:33.808249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.384 [2024-10-11 09:47:33.808294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:49.384 [2024-10-11 09:47:33.808335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.384 [2024-10-11 09:47:33.810905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.384 [2024-10-11 09:47:33.811016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:49.384 BaseBdev2 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:49.384 BaseBdev3_malloc 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 true 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 [2024-10-11 09:47:33.895489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:49.384 [2024-10-11 09:47:33.895689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.384 [2024-10-11 09:47:33.895732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:49.384 [2024-10-11 09:47:33.895744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.384 [2024-10-11 09:47:33.898307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.384 [2024-10-11 09:47:33.898356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:49.384 BaseBdev3 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 BaseBdev4_malloc 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 true 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 [2024-10-11 09:47:33.972526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:49.384 [2024-10-11 09:47:33.972617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.384 [2024-10-11 09:47:33.972644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:49.384 [2024-10-11 09:47:33.972658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.384 [2024-10-11 09:47:33.975269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.384 [2024-10-11 09:47:33.975337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:49.384 BaseBdev4 
00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.384 [2024-10-11 09:47:33.984606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.384 [2024-10-11 09:47:33.986916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.384 [2024-10-11 09:47:33.987079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.384 [2024-10-11 09:47:33.987183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:49.384 [2024-10-11 09:47:33.987531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:49.384 [2024-10-11 09:47:33.987593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:49.384 [2024-10-11 09:47:33.987957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:49.384 [2024-10-11 09:47:33.988215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:49.384 [2024-10-11 09:47:33.988263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:49.384 [2024-10-11 09:47:33.988578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.384 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.644 09:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.644 09:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.644 "name": "raid_bdev1", 00:13:49.644 "uuid": "618f2be9-54cb-408b-8d8c-9ec9b8604b14", 00:13:49.644 "strip_size_kb": 0, 00:13:49.644 "state": "online", 00:13:49.644 "raid_level": "raid1", 00:13:49.644 "superblock": true, 00:13:49.644 "num_base_bdevs": 4, 00:13:49.644 "num_base_bdevs_discovered": 4, 00:13:49.644 
"num_base_bdevs_operational": 4, 00:13:49.644 "base_bdevs_list": [ 00:13:49.644 { 00:13:49.644 "name": "BaseBdev1", 00:13:49.644 "uuid": "44e43a76-c4f6-5e34-835f-0f39777dacab", 00:13:49.644 "is_configured": true, 00:13:49.644 "data_offset": 2048, 00:13:49.644 "data_size": 63488 00:13:49.644 }, 00:13:49.644 { 00:13:49.644 "name": "BaseBdev2", 00:13:49.644 "uuid": "e9c96fae-e2de-532e-be5e-ff3d2d6db986", 00:13:49.644 "is_configured": true, 00:13:49.644 "data_offset": 2048, 00:13:49.644 "data_size": 63488 00:13:49.644 }, 00:13:49.644 { 00:13:49.644 "name": "BaseBdev3", 00:13:49.644 "uuid": "e9949fd4-5617-5467-a867-86ededd16b16", 00:13:49.644 "is_configured": true, 00:13:49.644 "data_offset": 2048, 00:13:49.644 "data_size": 63488 00:13:49.644 }, 00:13:49.644 { 00:13:49.644 "name": "BaseBdev4", 00:13:49.644 "uuid": "1d227769-460e-5459-8e31-c1eea859a92d", 00:13:49.644 "is_configured": true, 00:13:49.644 "data_offset": 2048, 00:13:49.644 "data_size": 63488 00:13:49.644 } 00:13:49.644 ] 00:13:49.644 }' 00:13:49.644 09:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.644 09:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.904 09:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:49.904 09:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:50.163 [2024-10-11 09:47:34.565535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.100 [2024-10-11 09:47:35.470260] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:51.100 [2024-10-11 09:47:35.470407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.100 [2024-10-11 09:47:35.470688] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.100 "name": "raid_bdev1", 00:13:51.100 "uuid": "618f2be9-54cb-408b-8d8c-9ec9b8604b14", 00:13:51.100 "strip_size_kb": 0, 00:13:51.100 "state": "online", 00:13:51.100 "raid_level": "raid1", 00:13:51.100 "superblock": true, 00:13:51.100 "num_base_bdevs": 4, 00:13:51.100 "num_base_bdevs_discovered": 3, 00:13:51.100 "num_base_bdevs_operational": 3, 00:13:51.100 "base_bdevs_list": [ 00:13:51.100 { 00:13:51.100 "name": null, 00:13:51.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.100 "is_configured": false, 00:13:51.100 "data_offset": 0, 00:13:51.100 "data_size": 63488 00:13:51.100 }, 00:13:51.100 { 00:13:51.100 "name": "BaseBdev2", 00:13:51.100 "uuid": "e9c96fae-e2de-532e-be5e-ff3d2d6db986", 00:13:51.100 "is_configured": true, 00:13:51.100 "data_offset": 2048, 00:13:51.100 "data_size": 63488 00:13:51.100 }, 00:13:51.100 { 00:13:51.100 "name": "BaseBdev3", 00:13:51.100 "uuid": "e9949fd4-5617-5467-a867-86ededd16b16", 00:13:51.100 "is_configured": true, 00:13:51.100 "data_offset": 2048, 00:13:51.100 "data_size": 63488 00:13:51.100 }, 00:13:51.100 { 00:13:51.100 "name": "BaseBdev4", 00:13:51.100 "uuid": "1d227769-460e-5459-8e31-c1eea859a92d", 00:13:51.100 "is_configured": true, 00:13:51.100 "data_offset": 2048, 00:13:51.100 "data_size": 63488 00:13:51.100 } 00:13:51.100 ] 
00:13:51.100 }' 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.100 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.359 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.360 [2024-10-11 09:47:35.911245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.360 [2024-10-11 09:47:35.911350] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.360 [2024-10-11 09:47:35.914538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.360 [2024-10-11 09:47:35.914601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.360 [2024-10-11 09:47:35.914719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.360 [2024-10-11 09:47:35.914758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:51.360 { 00:13:51.360 "results": [ 00:13:51.360 { 00:13:51.360 "job": "raid_bdev1", 00:13:51.360 "core_mask": "0x1", 00:13:51.360 "workload": "randrw", 00:13:51.360 "percentage": 50, 00:13:51.360 "status": "finished", 00:13:51.360 "queue_depth": 1, 00:13:51.360 "io_size": 131072, 00:13:51.360 "runtime": 1.346465, 00:13:51.360 "iops": 10021.055133256341, 00:13:51.360 "mibps": 1252.6318916570426, 00:13:51.360 "io_failed": 0, 00:13:51.360 "io_timeout": 0, 00:13:51.360 "avg_latency_us": 96.52398277353582, 00:13:51.360 "min_latency_us": 25.7117903930131, 00:13:51.360 "max_latency_us": 1738.564192139738 00:13:51.360 } 00:13:51.360 ], 00:13:51.360 "core_count": 1 
00:13:51.360 } 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75685 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75685 ']' 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75685 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75685 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.360 killing process with pid 75685 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75685' 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75685 00:13:51.360 [2024-10-11 09:47:35.964003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.360 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75685 00:13:51.927 [2024-10-11 09:47:36.324357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LQ5Qd6ZT8i 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:53.308 00:13:53.308 real 0m4.874s 00:13:53.308 user 0m5.766s 00:13:53.308 sys 0m0.644s 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.308 09:47:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.308 ************************************ 00:13:53.308 END TEST raid_write_error_test 00:13:53.308 ************************************ 00:13:53.308 09:47:37 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:53.308 09:47:37 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:53.308 09:47:37 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:53.308 09:47:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:53.308 09:47:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.308 09:47:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.308 ************************************ 00:13:53.308 START TEST raid_rebuild_test 00:13:53.308 ************************************ 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:53.308 
09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75832 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75832 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75832 ']' 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.308 09:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.308 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:53.308 Zero copy mechanism will not be used. 00:13:53.308 [2024-10-11 09:47:37.691458] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:13:53.308 [2024-10-11 09:47:37.691594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75832 ] 00:13:53.308 [2024-10-11 09:47:37.867260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.567 [2024-10-11 09:47:37.991916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.846 [2024-10-11 09:47:38.239520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.846 [2024-10-11 09:47:38.239565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.104 BaseBdev1_malloc 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.104 [2024-10-11 09:47:38.609340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:54.104 
[2024-10-11 09:47:38.609434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.104 [2024-10-11 09:47:38.609462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:54.104 [2024-10-11 09:47:38.609476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.104 [2024-10-11 09:47:38.612042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.104 [2024-10-11 09:47:38.612092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.104 BaseBdev1 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.104 BaseBdev2_malloc 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.104 [2024-10-11 09:47:38.673817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:54.104 [2024-10-11 09:47:38.673905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.104 [2024-10-11 09:47:38.673926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:54.104 [2024-10-11 09:47:38.673941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.104 [2024-10-11 09:47:38.676378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.104 BaseBdev2 00:13:54.104 [2024-10-11 09:47:38.676533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.104 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.362 spare_malloc 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.362 spare_delay 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.362 [2024-10-11 09:47:38.763350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.362 [2024-10-11 09:47:38.763523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:54.362 [2024-10-11 09:47:38.763590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:54.362 [2024-10-11 09:47:38.763636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.362 [2024-10-11 09:47:38.766210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.362 [2024-10-11 09:47:38.766319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.362 spare 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.362 [2024-10-11 09:47:38.775389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.362 [2024-10-11 09:47:38.777703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.362 [2024-10-11 09:47:38.777906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:54.362 [2024-10-11 09:47:38.777971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:54.362 [2024-10-11 09:47:38.778359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:54.362 [2024-10-11 09:47:38.778634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:54.362 [2024-10-11 09:47:38.778683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:54.362 [2024-10-11 09:47:38.778940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.362 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.363 "name": "raid_bdev1", 00:13:54.363 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:13:54.363 "strip_size_kb": 0, 00:13:54.363 "state": "online", 00:13:54.363 
"raid_level": "raid1", 00:13:54.363 "superblock": false, 00:13:54.363 "num_base_bdevs": 2, 00:13:54.363 "num_base_bdevs_discovered": 2, 00:13:54.363 "num_base_bdevs_operational": 2, 00:13:54.363 "base_bdevs_list": [ 00:13:54.363 { 00:13:54.363 "name": "BaseBdev1", 00:13:54.363 "uuid": "c668d354-041d-54e0-a464-278ef3b05b38", 00:13:54.363 "is_configured": true, 00:13:54.363 "data_offset": 0, 00:13:54.363 "data_size": 65536 00:13:54.363 }, 00:13:54.363 { 00:13:54.363 "name": "BaseBdev2", 00:13:54.363 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:13:54.363 "is_configured": true, 00:13:54.363 "data_offset": 0, 00:13:54.363 "data_size": 65536 00:13:54.363 } 00:13:54.363 ] 00:13:54.363 }' 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.363 09:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:54.929 [2024-10-11 09:47:39.270887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.929 09:47:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:54.929 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.930 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.930 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.930 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:54.930 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.930 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.930 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:55.188 [2024-10-11 09:47:39.614034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:55.188 /dev/nbd0 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.188 1+0 records in 00:13:55.188 1+0 records out 00:13:55.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649673 s, 6.3 MB/s 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:55.188 09:47:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:00.451 65536+0 records in 00:14:00.451 65536+0 records out 00:14:00.451 33554432 bytes (34 MB, 32 MiB) copied, 5.00065 s, 6.7 MB/s 00:14:00.451 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:00.451 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.451 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:00.451 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.451 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:00.452 [2024-10-11 09:47:44.931955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.452 [2024-10-11 09:47:44.952265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.452 09:47:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.452 09:47:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.452 09:47:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.452 "name": "raid_bdev1", 00:14:00.452 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:00.452 "strip_size_kb": 0, 00:14:00.452 "state": "online", 00:14:00.452 "raid_level": "raid1", 00:14:00.452 "superblock": false, 00:14:00.452 "num_base_bdevs": 2, 00:14:00.452 "num_base_bdevs_discovered": 1, 00:14:00.452 "num_base_bdevs_operational": 1, 00:14:00.452 "base_bdevs_list": [ 00:14:00.452 { 00:14:00.452 "name": null, 00:14:00.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.452 "is_configured": false, 00:14:00.452 "data_offset": 0, 00:14:00.452 "data_size": 65536 00:14:00.452 }, 00:14:00.452 { 00:14:00.452 "name": "BaseBdev2", 00:14:00.452 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:00.452 "is_configured": true, 00:14:00.452 "data_offset": 0, 00:14:00.452 "data_size": 65536 00:14:00.452 } 00:14:00.452 ] 00:14:00.452 }' 00:14:00.452 09:47:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.452 09:47:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.019 09:47:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.019 09:47:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.019 09:47:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.019 [2024-10-11 09:47:45.407519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.019 [2024-10-11 09:47:45.428306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:14:01.019 09:47:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.019 09:47:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:01.019 [2024-10-11 09:47:45.430373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.954 "name": "raid_bdev1", 00:14:01.954 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:01.954 "strip_size_kb": 0, 00:14:01.954 "state": "online", 00:14:01.954 "raid_level": "raid1", 00:14:01.954 "superblock": false, 00:14:01.954 "num_base_bdevs": 2, 00:14:01.954 "num_base_bdevs_discovered": 2, 00:14:01.954 "num_base_bdevs_operational": 2, 00:14:01.954 "process": { 00:14:01.954 "type": "rebuild", 00:14:01.954 "target": "spare", 00:14:01.954 "progress": { 00:14:01.954 
"blocks": 20480, 00:14:01.954 "percent": 31 00:14:01.954 } 00:14:01.954 }, 00:14:01.954 "base_bdevs_list": [ 00:14:01.954 { 00:14:01.954 "name": "spare", 00:14:01.954 "uuid": "32eb2a9a-5534-5d06-a447-879c9469c2c6", 00:14:01.954 "is_configured": true, 00:14:01.954 "data_offset": 0, 00:14:01.954 "data_size": 65536 00:14:01.954 }, 00:14:01.954 { 00:14:01.954 "name": "BaseBdev2", 00:14:01.954 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:01.954 "is_configured": true, 00:14:01.954 "data_offset": 0, 00:14:01.954 "data_size": 65536 00:14:01.954 } 00:14:01.954 ] 00:14:01.954 }' 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.954 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.213 [2024-10-11 09:47:46.597493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.213 [2024-10-11 09:47:46.636469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.213 [2024-10-11 09:47:46.636669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.213 [2024-10-11 09:47:46.636708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.213 [2024-10-11 09:47:46.636733] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.213 09:47:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.213 "name": "raid_bdev1", 00:14:02.213 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:02.213 "strip_size_kb": 0, 00:14:02.213 "state": "online", 00:14:02.213 "raid_level": "raid1", 00:14:02.213 
"superblock": false, 00:14:02.213 "num_base_bdevs": 2, 00:14:02.213 "num_base_bdevs_discovered": 1, 00:14:02.213 "num_base_bdevs_operational": 1, 00:14:02.213 "base_bdevs_list": [ 00:14:02.213 { 00:14:02.213 "name": null, 00:14:02.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.213 "is_configured": false, 00:14:02.213 "data_offset": 0, 00:14:02.213 "data_size": 65536 00:14:02.213 }, 00:14:02.213 { 00:14:02.213 "name": "BaseBdev2", 00:14:02.213 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:02.213 "is_configured": true, 00:14:02.213 "data_offset": 0, 00:14:02.213 "data_size": 65536 00:14:02.213 } 00:14:02.213 ] 00:14:02.213 }' 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.213 09:47:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:02.779 "name": "raid_bdev1", 00:14:02.779 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:02.779 "strip_size_kb": 0, 00:14:02.779 "state": "online", 00:14:02.779 "raid_level": "raid1", 00:14:02.779 "superblock": false, 00:14:02.779 "num_base_bdevs": 2, 00:14:02.779 "num_base_bdevs_discovered": 1, 00:14:02.779 "num_base_bdevs_operational": 1, 00:14:02.779 "base_bdevs_list": [ 00:14:02.779 { 00:14:02.779 "name": null, 00:14:02.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.779 "is_configured": false, 00:14:02.779 "data_offset": 0, 00:14:02.779 "data_size": 65536 00:14:02.779 }, 00:14:02.779 { 00:14:02.779 "name": "BaseBdev2", 00:14:02.779 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:02.779 "is_configured": true, 00:14:02.779 "data_offset": 0, 00:14:02.779 "data_size": 65536 00:14:02.779 } 00:14:02.779 ] 00:14:02.779 }' 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.779 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.780 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.780 09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.780 09:47:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.780 09:47:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.780 [2024-10-11 09:47:47.273513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.780 [2024-10-11 09:47:47.291363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:02.780 09:47:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.780 
09:47:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:02.780 [2024-10-11 09:47:47.293432] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.715 09:47:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.716 09:47:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.975 "name": "raid_bdev1", 00:14:03.975 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:03.975 "strip_size_kb": 0, 00:14:03.975 "state": "online", 00:14:03.975 "raid_level": "raid1", 00:14:03.975 "superblock": false, 00:14:03.975 "num_base_bdevs": 2, 00:14:03.975 "num_base_bdevs_discovered": 2, 00:14:03.975 "num_base_bdevs_operational": 2, 00:14:03.975 "process": { 00:14:03.975 "type": "rebuild", 00:14:03.975 "target": "spare", 00:14:03.975 "progress": { 00:14:03.975 "blocks": 20480, 00:14:03.975 "percent": 31 00:14:03.975 } 00:14:03.975 }, 00:14:03.975 "base_bdevs_list": [ 
00:14:03.975 { 00:14:03.975 "name": "spare", 00:14:03.975 "uuid": "32eb2a9a-5534-5d06-a447-879c9469c2c6", 00:14:03.975 "is_configured": true, 00:14:03.975 "data_offset": 0, 00:14:03.975 "data_size": 65536 00:14:03.975 }, 00:14:03.975 { 00:14:03.975 "name": "BaseBdev2", 00:14:03.975 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:03.975 "is_configured": true, 00:14:03.975 "data_offset": 0, 00:14:03.975 "data_size": 65536 00:14:03.975 } 00:14:03.975 ] 00:14:03.975 }' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=384 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.975 
09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.975 "name": "raid_bdev1", 00:14:03.975 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:03.975 "strip_size_kb": 0, 00:14:03.975 "state": "online", 00:14:03.975 "raid_level": "raid1", 00:14:03.975 "superblock": false, 00:14:03.975 "num_base_bdevs": 2, 00:14:03.975 "num_base_bdevs_discovered": 2, 00:14:03.975 "num_base_bdevs_operational": 2, 00:14:03.975 "process": { 00:14:03.975 "type": "rebuild", 00:14:03.975 "target": "spare", 00:14:03.975 "progress": { 00:14:03.975 "blocks": 22528, 00:14:03.975 "percent": 34 00:14:03.975 } 00:14:03.975 }, 00:14:03.975 "base_bdevs_list": [ 00:14:03.975 { 00:14:03.975 "name": "spare", 00:14:03.975 "uuid": "32eb2a9a-5534-5d06-a447-879c9469c2c6", 00:14:03.975 "is_configured": true, 00:14:03.975 "data_offset": 0, 00:14:03.975 "data_size": 65536 00:14:03.975 }, 00:14:03.975 { 00:14:03.975 "name": "BaseBdev2", 00:14:03.975 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:03.975 "is_configured": true, 00:14:03.975 "data_offset": 0, 00:14:03.975 "data_size": 65536 00:14:03.975 } 00:14:03.975 ] 00:14:03.975 }' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.975 09:47:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.355 "name": "raid_bdev1", 00:14:05.355 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:05.355 "strip_size_kb": 0, 00:14:05.355 "state": "online", 00:14:05.355 "raid_level": "raid1", 00:14:05.355 "superblock": false, 00:14:05.355 "num_base_bdevs": 2, 00:14:05.355 "num_base_bdevs_discovered": 2, 00:14:05.355 "num_base_bdevs_operational": 2, 00:14:05.355 "process": { 
00:14:05.355 "type": "rebuild", 00:14:05.355 "target": "spare", 00:14:05.355 "progress": { 00:14:05.355 "blocks": 45056, 00:14:05.355 "percent": 68 00:14:05.355 } 00:14:05.355 }, 00:14:05.355 "base_bdevs_list": [ 00:14:05.355 { 00:14:05.355 "name": "spare", 00:14:05.355 "uuid": "32eb2a9a-5534-5d06-a447-879c9469c2c6", 00:14:05.355 "is_configured": true, 00:14:05.355 "data_offset": 0, 00:14:05.355 "data_size": 65536 00:14:05.355 }, 00:14:05.355 { 00:14:05.355 "name": "BaseBdev2", 00:14:05.355 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:05.355 "is_configured": true, 00:14:05.355 "data_offset": 0, 00:14:05.355 "data_size": 65536 00:14:05.355 } 00:14:05.355 ] 00:14:05.355 }' 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.355 09:47:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.923 [2024-10-11 09:47:50.509354] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:05.923 [2024-10-11 09:47:50.509559] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:05.923 [2024-10-11 09:47:50.509635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.182 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.183 "name": "raid_bdev1", 00:14:06.183 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:06.183 "strip_size_kb": 0, 00:14:06.183 "state": "online", 00:14:06.183 "raid_level": "raid1", 00:14:06.183 "superblock": false, 00:14:06.183 "num_base_bdevs": 2, 00:14:06.183 "num_base_bdevs_discovered": 2, 00:14:06.183 "num_base_bdevs_operational": 2, 00:14:06.183 "base_bdevs_list": [ 00:14:06.183 { 00:14:06.183 "name": "spare", 00:14:06.183 "uuid": "32eb2a9a-5534-5d06-a447-879c9469c2c6", 00:14:06.183 "is_configured": true, 00:14:06.183 "data_offset": 0, 00:14:06.183 "data_size": 65536 00:14:06.183 }, 00:14:06.183 { 00:14:06.183 "name": "BaseBdev2", 00:14:06.183 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:06.183 "is_configured": true, 00:14:06.183 "data_offset": 0, 00:14:06.183 "data_size": 65536 00:14:06.183 } 00:14:06.183 ] 00:14:06.183 }' 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.183 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:06.183 09:47:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.443 "name": "raid_bdev1", 00:14:06.443 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:06.443 "strip_size_kb": 0, 00:14:06.443 "state": "online", 00:14:06.443 "raid_level": "raid1", 00:14:06.443 "superblock": false, 00:14:06.443 "num_base_bdevs": 2, 00:14:06.443 "num_base_bdevs_discovered": 2, 00:14:06.443 "num_base_bdevs_operational": 2, 00:14:06.443 "base_bdevs_list": [ 00:14:06.443 { 00:14:06.443 "name": "spare", 00:14:06.443 "uuid": "32eb2a9a-5534-5d06-a447-879c9469c2c6", 00:14:06.443 "is_configured": true, 
00:14:06.443 "data_offset": 0, 00:14:06.443 "data_size": 65536 00:14:06.443 }, 00:14:06.443 { 00:14:06.443 "name": "BaseBdev2", 00:14:06.443 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:06.443 "is_configured": true, 00:14:06.443 "data_offset": 0, 00:14:06.443 "data_size": 65536 00:14:06.443 } 00:14:06.443 ] 00:14:06.443 }' 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.443 09:47:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.443 09:47:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.443 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.443 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.443 "name": "raid_bdev1", 00:14:06.443 "uuid": "6b3e020c-4e39-47b2-9a8d-c5f6a8e34493", 00:14:06.443 "strip_size_kb": 0, 00:14:06.443 "state": "online", 00:14:06.443 "raid_level": "raid1", 00:14:06.443 "superblock": false, 00:14:06.443 "num_base_bdevs": 2, 00:14:06.443 "num_base_bdevs_discovered": 2, 00:14:06.443 "num_base_bdevs_operational": 2, 00:14:06.443 "base_bdevs_list": [ 00:14:06.443 { 00:14:06.443 "name": "spare", 00:14:06.443 "uuid": "32eb2a9a-5534-5d06-a447-879c9469c2c6", 00:14:06.443 "is_configured": true, 00:14:06.443 "data_offset": 0, 00:14:06.443 "data_size": 65536 00:14:06.443 }, 00:14:06.443 { 00:14:06.443 "name": "BaseBdev2", 00:14:06.443 "uuid": "b779c8cd-7b5b-53c7-a5f8-6cc6a9265755", 00:14:06.443 "is_configured": true, 00:14:06.443 "data_offset": 0, 00:14:06.443 "data_size": 65536 00:14:06.443 } 00:14:06.443 ] 00:14:06.443 }' 00:14:06.443 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.443 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.013 [2024-10-11 09:47:51.453242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.013 [2024-10-11 
09:47:51.453333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.013 [2024-10-11 09:47:51.453456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.013 [2024-10-11 09:47:51.453557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.013 [2024-10-11 09:47:51.453604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:07.013 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.014 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:07.274 /dev/nbd0 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.274 1+0 records in 00:14:07.274 1+0 records out 00:14:07.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546847 s, 7.5 MB/s 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.274 09:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:07.534 /dev/nbd1 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.534 1+0 records in 00:14:07.534 1+0 records out 00:14:07.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382318 s, 10.7 MB/s 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.534 09:47:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:07.793 09:47:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:07.793 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.793 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.793 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.793 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:07.793 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.793 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.052 09:47:52 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.052 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75832 00:14:08.311 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75832 ']' 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75832 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75832 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75832' 00:14:08.312 killing process with pid 75832 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75832 00:14:08.312 Received shutdown signal, test time was about 60.000000 seconds 00:14:08.312 00:14:08.312 Latency(us) 00:14:08.312 [2024-10-11T09:47:52.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.312 [2024-10-11T09:47:52.944Z] =================================================================================================================== 00:14:08.312 [2024-10-11T09:47:52.944Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.312 [2024-10-11 09:47:52.772583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.312 09:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75832 00:14:08.572 [2024-10-11 09:47:53.073715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:09.951 00:14:09.951 real 0m16.585s 00:14:09.951 user 0m18.496s 00:14:09.951 sys 
0m3.428s 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.951 ************************************ 00:14:09.951 END TEST raid_rebuild_test 00:14:09.951 ************************************ 00:14:09.951 09:47:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:09.951 09:47:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:09.951 09:47:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.951 09:47:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.951 ************************************ 00:14:09.951 START TEST raid_rebuild_test_sb 00:14:09.951 ************************************ 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76286 00:14:09.951 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76286 00:14:09.952 09:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:09.952 09:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' 
-z 76286 ']' 00:14:09.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.952 09:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.952 09:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.952 09:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.952 09:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.952 09:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.952 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:09.952 Zero copy mechanism will not be used. 00:14:09.952 [2024-10-11 09:47:54.356626] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:09.952 [2024-10-11 09:47:54.356762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76286 ] 00:14:09.952 [2024-10-11 09:47:54.516466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.211 [2024-10-11 09:47:54.641817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.470 [2024-10-11 09:47:54.872525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.470 [2024-10-11 09:47:54.872599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.730 BaseBdev1_malloc 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.730 [2024-10-11 09:47:55.291748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:10.730 [2024-10-11 09:47:55.291825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.730 [2024-10-11 09:47:55.291850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:10.730 [2024-10-11 09:47:55.291862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.730 [2024-10-11 09:47:55.294123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.730 [2024-10-11 09:47:55.294164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.730 BaseBdev1 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.730 BaseBdev2_malloc 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.730 [2024-10-11 09:47:55.350840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:10.730 [2024-10-11 09:47:55.350905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.730 [2024-10-11 09:47:55.350924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:10.730 [2024-10-11 09:47:55.350936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.730 [2024-10-11 09:47:55.353133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.730 [2024-10-11 09:47:55.353234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.730 BaseBdev2 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.730 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.990 spare_malloc 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.990 spare_delay 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.990 [2024-10-11 09:47:55.433148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.990 [2024-10-11 09:47:55.433251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.990 [2024-10-11 09:47:55.433276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:10.990 [2024-10-11 09:47:55.433287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.990 [2024-10-11 09:47:55.435420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.990 [2024-10-11 09:47:55.435459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.990 spare 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.990 
09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.990 [2024-10-11 09:47:55.445181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.990 [2024-10-11 09:47:55.446940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.990 [2024-10-11 09:47:55.447106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:10.990 [2024-10-11 09:47:55.447122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.990 [2024-10-11 09:47:55.447373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:10.990 [2024-10-11 09:47:55.447527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:10.990 [2024-10-11 09:47:55.447536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:10.990 [2024-10-11 09:47:55.447690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.990 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.991 "name": "raid_bdev1", 00:14:10.991 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:10.991 "strip_size_kb": 0, 00:14:10.991 "state": "online", 00:14:10.991 "raid_level": "raid1", 00:14:10.991 "superblock": true, 00:14:10.991 "num_base_bdevs": 2, 00:14:10.991 "num_base_bdevs_discovered": 2, 00:14:10.991 "num_base_bdevs_operational": 2, 00:14:10.991 "base_bdevs_list": [ 00:14:10.991 { 00:14:10.991 "name": "BaseBdev1", 00:14:10.991 "uuid": "08008818-989b-5474-bbf7-b7b023fa8889", 00:14:10.991 "is_configured": true, 00:14:10.991 "data_offset": 2048, 00:14:10.991 "data_size": 63488 00:14:10.991 }, 00:14:10.991 { 00:14:10.991 "name": "BaseBdev2", 00:14:10.991 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:10.991 "is_configured": true, 00:14:10.991 "data_offset": 2048, 00:14:10.991 "data_size": 63488 00:14:10.991 } 00:14:10.991 ] 00:14:10.991 }' 00:14:10.991 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.991 09:47:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.561 [2024-10-11 09:47:55.924784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.561 09:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.561 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:11.820 [2024-10-11 09:47:56.216027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:11.820 /dev/nbd0 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:11.820 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.821 1+0 records in 00:14:11.821 1+0 records out 00:14:11.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322227 s, 12.7 MB/s 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:11.821 09:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:16.029 63488+0 records in 00:14:16.029 63488+0 records out 00:14:16.029 32505856 bytes (33 MB, 31 MiB) copied, 4.20456 s, 7.7 MB/s 00:14:16.029 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:16.029 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:16.029 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:16.029 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.029 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:16.029 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.029 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:16.287 [2024-10-11 09:48:00.719587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 [2024-10-11 09:48:00.757458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.287 "name": "raid_bdev1", 00:14:16.287 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:16.287 "strip_size_kb": 0, 00:14:16.287 "state": "online", 00:14:16.287 "raid_level": "raid1", 
00:14:16.287 "superblock": true, 00:14:16.287 "num_base_bdevs": 2, 00:14:16.287 "num_base_bdevs_discovered": 1, 00:14:16.287 "num_base_bdevs_operational": 1, 00:14:16.287 "base_bdevs_list": [ 00:14:16.287 { 00:14:16.287 "name": null, 00:14:16.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.287 "is_configured": false, 00:14:16.287 "data_offset": 0, 00:14:16.287 "data_size": 63488 00:14:16.287 }, 00:14:16.287 { 00:14:16.287 "name": "BaseBdev2", 00:14:16.287 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:16.287 "is_configured": true, 00:14:16.287 "data_offset": 2048, 00:14:16.287 "data_size": 63488 00:14:16.287 } 00:14:16.287 ] 00:14:16.287 }' 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.287 09:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.910 09:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.910 09:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.910 09:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.910 [2024-10-11 09:48:01.240646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.910 [2024-10-11 09:48:01.259959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:16.910 09:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.910 [2024-10-11 09:48:01.262045] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.910 09:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.846 "name": "raid_bdev1", 00:14:17.846 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:17.846 "strip_size_kb": 0, 00:14:17.846 "state": "online", 00:14:17.846 "raid_level": "raid1", 00:14:17.846 "superblock": true, 00:14:17.846 "num_base_bdevs": 2, 00:14:17.846 "num_base_bdevs_discovered": 2, 00:14:17.846 "num_base_bdevs_operational": 2, 00:14:17.846 "process": { 00:14:17.846 "type": "rebuild", 00:14:17.846 "target": "spare", 00:14:17.846 "progress": { 00:14:17.846 "blocks": 20480, 00:14:17.846 "percent": 32 00:14:17.846 } 00:14:17.846 }, 00:14:17.846 "base_bdevs_list": [ 00:14:17.846 { 00:14:17.846 "name": "spare", 00:14:17.846 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:17.846 "is_configured": true, 00:14:17.846 "data_offset": 2048, 00:14:17.846 "data_size": 63488 00:14:17.846 }, 00:14:17.846 { 00:14:17.846 "name": "BaseBdev2", 00:14:17.846 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:17.846 "is_configured": true, 00:14:17.846 "data_offset": 2048, 
00:14:17.846 "data_size": 63488 00:14:17.846 } 00:14:17.846 ] 00:14:17.846 }' 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.846 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.846 [2024-10-11 09:48:02.425826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.846 [2024-10-11 09:48:02.468215] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:17.846 [2024-10-11 09:48:02.468401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.846 [2024-10-11 09:48:02.468450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.846 [2024-10-11 09:48:02.468481] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.105 09:48:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.105 "name": "raid_bdev1", 00:14:18.105 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:18.105 "strip_size_kb": 0, 00:14:18.105 "state": "online", 00:14:18.105 "raid_level": "raid1", 00:14:18.105 "superblock": true, 00:14:18.105 "num_base_bdevs": 2, 00:14:18.105 "num_base_bdevs_discovered": 1, 00:14:18.105 "num_base_bdevs_operational": 1, 00:14:18.105 "base_bdevs_list": [ 00:14:18.105 { 00:14:18.105 "name": null, 00:14:18.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.105 "is_configured": false, 00:14:18.105 "data_offset": 0, 00:14:18.105 "data_size": 63488 00:14:18.105 }, 00:14:18.105 { 
00:14:18.105 "name": "BaseBdev2", 00:14:18.105 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:18.105 "is_configured": true, 00:14:18.105 "data_offset": 2048, 00:14:18.105 "data_size": 63488 00:14:18.105 } 00:14:18.105 ] 00:14:18.105 }' 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.105 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.364 "name": "raid_bdev1", 00:14:18.364 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:18.364 "strip_size_kb": 0, 00:14:18.364 "state": "online", 00:14:18.364 "raid_level": "raid1", 00:14:18.364 "superblock": true, 00:14:18.364 "num_base_bdevs": 2, 00:14:18.364 "num_base_bdevs_discovered": 1, 00:14:18.364 "num_base_bdevs_operational": 1, 
00:14:18.364 "base_bdevs_list": [ 00:14:18.364 { 00:14:18.364 "name": null, 00:14:18.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.364 "is_configured": false, 00:14:18.364 "data_offset": 0, 00:14:18.364 "data_size": 63488 00:14:18.364 }, 00:14:18.364 { 00:14:18.364 "name": "BaseBdev2", 00:14:18.364 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:18.364 "is_configured": true, 00:14:18.364 "data_offset": 2048, 00:14:18.364 "data_size": 63488 00:14:18.364 } 00:14:18.364 ] 00:14:18.364 }' 00:14:18.364 09:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.624 [2024-10-11 09:48:03.097637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.624 [2024-10-11 09:48:03.117369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.624 09:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:18.624 [2024-10-11 09:48:03.119571] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.566 "name": "raid_bdev1", 00:14:19.566 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:19.566 "strip_size_kb": 0, 00:14:19.566 "state": "online", 00:14:19.566 "raid_level": "raid1", 00:14:19.566 "superblock": true, 00:14:19.566 "num_base_bdevs": 2, 00:14:19.566 "num_base_bdevs_discovered": 2, 00:14:19.566 "num_base_bdevs_operational": 2, 00:14:19.566 "process": { 00:14:19.566 "type": "rebuild", 00:14:19.566 "target": "spare", 00:14:19.566 "progress": { 00:14:19.566 "blocks": 20480, 00:14:19.566 "percent": 32 00:14:19.566 } 00:14:19.566 }, 00:14:19.566 "base_bdevs_list": [ 00:14:19.566 { 00:14:19.566 "name": "spare", 00:14:19.566 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:19.566 "is_configured": true, 00:14:19.566 "data_offset": 2048, 00:14:19.566 "data_size": 63488 00:14:19.566 }, 00:14:19.566 { 00:14:19.566 "name": "BaseBdev2", 00:14:19.566 "uuid": 
"3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:19.566 "is_configured": true, 00:14:19.566 "data_offset": 2048, 00:14:19.566 "data_size": 63488 00:14:19.566 } 00:14:19.566 ] 00:14:19.566 }' 00:14:19.566 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:19.826 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.826 "name": "raid_bdev1", 00:14:19.826 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:19.826 "strip_size_kb": 0, 00:14:19.826 "state": "online", 00:14:19.826 "raid_level": "raid1", 00:14:19.826 "superblock": true, 00:14:19.826 "num_base_bdevs": 2, 00:14:19.826 "num_base_bdevs_discovered": 2, 00:14:19.826 "num_base_bdevs_operational": 2, 00:14:19.826 "process": { 00:14:19.826 "type": "rebuild", 00:14:19.826 "target": "spare", 00:14:19.826 "progress": { 00:14:19.826 "blocks": 22528, 00:14:19.826 "percent": 35 00:14:19.826 } 00:14:19.826 }, 00:14:19.826 "base_bdevs_list": [ 00:14:19.826 { 00:14:19.826 "name": "spare", 00:14:19.826 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:19.826 "is_configured": true, 00:14:19.826 "data_offset": 2048, 00:14:19.826 "data_size": 63488 00:14:19.826 }, 00:14:19.826 { 00:14:19.826 "name": "BaseBdev2", 00:14:19.826 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:19.826 "is_configured": true, 00:14:19.826 "data_offset": 2048, 00:14:19.826 "data_size": 63488 00:14:19.826 } 00:14:19.826 ] 00:14:19.826 }' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.826 09:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.218 "name": "raid_bdev1", 00:14:21.218 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:21.218 "strip_size_kb": 0, 00:14:21.218 "state": "online", 00:14:21.218 "raid_level": "raid1", 00:14:21.218 "superblock": true, 00:14:21.218 "num_base_bdevs": 2, 00:14:21.218 "num_base_bdevs_discovered": 2, 00:14:21.218 
"num_base_bdevs_operational": 2, 00:14:21.218 "process": { 00:14:21.218 "type": "rebuild", 00:14:21.218 "target": "spare", 00:14:21.218 "progress": { 00:14:21.218 "blocks": 45056, 00:14:21.218 "percent": 70 00:14:21.218 } 00:14:21.218 }, 00:14:21.218 "base_bdevs_list": [ 00:14:21.218 { 00:14:21.218 "name": "spare", 00:14:21.218 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:21.218 "is_configured": true, 00:14:21.218 "data_offset": 2048, 00:14:21.218 "data_size": 63488 00:14:21.218 }, 00:14:21.218 { 00:14:21.218 "name": "BaseBdev2", 00:14:21.218 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:21.218 "is_configured": true, 00:14:21.218 "data_offset": 2048, 00:14:21.218 "data_size": 63488 00:14:21.218 } 00:14:21.218 ] 00:14:21.218 }' 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.218 09:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.789 [2024-10-11 09:48:06.235189] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:21.789 [2024-10-11 09:48:06.235397] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:21.789 [2024-10-11 09:48:06.235569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.048 "name": "raid_bdev1", 00:14:22.048 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:22.048 "strip_size_kb": 0, 00:14:22.048 "state": "online", 00:14:22.048 "raid_level": "raid1", 00:14:22.048 "superblock": true, 00:14:22.048 "num_base_bdevs": 2, 00:14:22.048 "num_base_bdevs_discovered": 2, 00:14:22.048 "num_base_bdevs_operational": 2, 00:14:22.048 "base_bdevs_list": [ 00:14:22.048 { 00:14:22.048 "name": "spare", 00:14:22.048 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:22.048 "is_configured": true, 00:14:22.048 "data_offset": 2048, 00:14:22.048 "data_size": 63488 00:14:22.048 }, 00:14:22.048 { 00:14:22.048 "name": "BaseBdev2", 00:14:22.048 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:22.048 "is_configured": true, 00:14:22.048 "data_offset": 2048, 00:14:22.048 "data_size": 63488 00:14:22.048 } 00:14:22.048 ] 00:14:22.048 }' 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.048 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.307 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.307 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.307 "name": "raid_bdev1", 00:14:22.307 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:22.307 "strip_size_kb": 0, 00:14:22.307 "state": "online", 00:14:22.307 "raid_level": "raid1", 00:14:22.307 "superblock": true, 00:14:22.307 "num_base_bdevs": 2, 00:14:22.307 "num_base_bdevs_discovered": 2, 00:14:22.307 "num_base_bdevs_operational": 2, 
00:14:22.307 "base_bdevs_list": [ 00:14:22.307 { 00:14:22.307 "name": "spare", 00:14:22.307 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:22.307 "is_configured": true, 00:14:22.308 "data_offset": 2048, 00:14:22.308 "data_size": 63488 00:14:22.308 }, 00:14:22.308 { 00:14:22.308 "name": "BaseBdev2", 00:14:22.308 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:22.308 "is_configured": true, 00:14:22.308 "data_offset": 2048, 00:14:22.308 "data_size": 63488 00:14:22.308 } 00:14:22.308 ] 00:14:22.308 }' 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.308 09:48:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.308 "name": "raid_bdev1", 00:14:22.308 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:22.308 "strip_size_kb": 0, 00:14:22.308 "state": "online", 00:14:22.308 "raid_level": "raid1", 00:14:22.308 "superblock": true, 00:14:22.308 "num_base_bdevs": 2, 00:14:22.308 "num_base_bdevs_discovered": 2, 00:14:22.308 "num_base_bdevs_operational": 2, 00:14:22.308 "base_bdevs_list": [ 00:14:22.308 { 00:14:22.308 "name": "spare", 00:14:22.308 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:22.308 "is_configured": true, 00:14:22.308 "data_offset": 2048, 00:14:22.308 "data_size": 63488 00:14:22.308 }, 00:14:22.308 { 00:14:22.308 "name": "BaseBdev2", 00:14:22.308 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:22.308 "is_configured": true, 00:14:22.308 "data_offset": 2048, 00:14:22.308 "data_size": 63488 00:14:22.308 } 00:14:22.308 ] 00:14:22.308 }' 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.308 09:48:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.877 [2024-10-11 09:48:07.278034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.877 [2024-10-11 09:48:07.278134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.877 [2024-10-11 09:48:07.278255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.877 [2024-10-11 09:48:07.278337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.877 [2024-10-11 09:48:07.278351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.877 
09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:22.877 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:23.135 /dev/nbd0 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.135 09:48:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.135 1+0 records in 00:14:23.135 1+0 records out 00:14:23.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596844 s, 6.9 MB/s 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.135 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.136 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:23.395 /dev/nbd1 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- 
# grep -q -w nbd1 /proc/partitions 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.395 1+0 records in 00:14:23.395 1+0 records out 00:14:23.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318571 s, 12.9 MB/s 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.395 09:48:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:23.654 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:23.654 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.654 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:23.654 09:48:08 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.654 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:23.654 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.654 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:14:23.913 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.172 [2024-10-11 09:48:08.573884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.172 [2024-10-11 09:48:08.574379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.172 [2024-10-11 09:48:08.574484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:24.172 [2024-10-11 09:48:08.574567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.172 [2024-10-11 09:48:08.576943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.172 [2024-10-11 09:48:08.577115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.172 [2024-10-11 09:48:08.577287] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:14:24.172 [2024-10-11 09:48:08.577361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.172 [2024-10-11 09:48:08.577533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.172 spare 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.172 [2024-10-11 09:48:08.677441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:24.172 [2024-10-11 09:48:08.677483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.172 [2024-10-11 09:48:08.677909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:24.172 [2024-10-11 09:48:08.678165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:24.172 [2024-10-11 09:48:08.678210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:24.172 [2024-10-11 09:48:08.678455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.172 09:48:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.172 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.172 "name": "raid_bdev1", 00:14:24.172 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:24.172 "strip_size_kb": 0, 00:14:24.172 "state": "online", 00:14:24.172 "raid_level": "raid1", 00:14:24.172 "superblock": true, 00:14:24.172 "num_base_bdevs": 2, 00:14:24.172 "num_base_bdevs_discovered": 2, 00:14:24.172 "num_base_bdevs_operational": 2, 00:14:24.172 "base_bdevs_list": [ 00:14:24.172 { 00:14:24.172 "name": "spare", 00:14:24.172 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:24.172 "is_configured": true, 00:14:24.172 "data_offset": 2048, 00:14:24.173 "data_size": 63488 00:14:24.173 }, 00:14:24.173 { 
00:14:24.173 "name": "BaseBdev2", 00:14:24.173 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:24.173 "is_configured": true, 00:14:24.173 "data_offset": 2048, 00:14:24.173 "data_size": 63488 00:14:24.173 } 00:14:24.173 ] 00:14:24.173 }' 00:14:24.173 09:48:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.173 09:48:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.747 "name": "raid_bdev1", 00:14:24.747 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:24.747 "strip_size_kb": 0, 00:14:24.747 "state": "online", 00:14:24.747 "raid_level": "raid1", 00:14:24.747 "superblock": true, 00:14:24.747 "num_base_bdevs": 2, 00:14:24.747 "num_base_bdevs_discovered": 2, 00:14:24.747 "num_base_bdevs_operational": 2, 
00:14:24.747 "base_bdevs_list": [ 00:14:24.747 { 00:14:24.747 "name": "spare", 00:14:24.747 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:24.747 "is_configured": true, 00:14:24.747 "data_offset": 2048, 00:14:24.747 "data_size": 63488 00:14:24.747 }, 00:14:24.747 { 00:14:24.747 "name": "BaseBdev2", 00:14:24.747 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:24.747 "is_configured": true, 00:14:24.747 "data_offset": 2048, 00:14:24.747 "data_size": 63488 00:14:24.747 } 00:14:24.747 ] 00:14:24.747 }' 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.747 [2024-10-11 09:48:09.289429] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.747 "name": "raid_bdev1", 00:14:24.747 "uuid": 
"38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:24.747 "strip_size_kb": 0, 00:14:24.747 "state": "online", 00:14:24.747 "raid_level": "raid1", 00:14:24.747 "superblock": true, 00:14:24.747 "num_base_bdevs": 2, 00:14:24.747 "num_base_bdevs_discovered": 1, 00:14:24.747 "num_base_bdevs_operational": 1, 00:14:24.747 "base_bdevs_list": [ 00:14:24.747 { 00:14:24.747 "name": null, 00:14:24.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.747 "is_configured": false, 00:14:24.747 "data_offset": 0, 00:14:24.747 "data_size": 63488 00:14:24.747 }, 00:14:24.747 { 00:14:24.747 "name": "BaseBdev2", 00:14:24.747 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:24.747 "is_configured": true, 00:14:24.747 "data_offset": 2048, 00:14:24.747 "data_size": 63488 00:14:24.747 } 00:14:24.747 ] 00:14:24.747 }' 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.747 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.316 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.316 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.316 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.316 [2024-10-11 09:48:09.704815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.316 [2024-10-11 09:48:09.705071] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:25.316 [2024-10-11 09:48:09.705145] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:25.316 [2024-10-11 09:48:09.705550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.316 [2024-10-11 09:48:09.724337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:25.316 09:48:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.316 09:48:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:25.316 [2024-10-11 09:48:09.726426] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.255 "name": "raid_bdev1", 00:14:26.255 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:26.255 "strip_size_kb": 0, 00:14:26.255 "state": "online", 00:14:26.255 "raid_level": "raid1", 
00:14:26.255 "superblock": true, 00:14:26.255 "num_base_bdevs": 2, 00:14:26.255 "num_base_bdevs_discovered": 2, 00:14:26.255 "num_base_bdevs_operational": 2, 00:14:26.255 "process": { 00:14:26.255 "type": "rebuild", 00:14:26.255 "target": "spare", 00:14:26.255 "progress": { 00:14:26.255 "blocks": 20480, 00:14:26.255 "percent": 32 00:14:26.255 } 00:14:26.255 }, 00:14:26.255 "base_bdevs_list": [ 00:14:26.255 { 00:14:26.255 "name": "spare", 00:14:26.255 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:26.255 "is_configured": true, 00:14:26.255 "data_offset": 2048, 00:14:26.255 "data_size": 63488 00:14:26.255 }, 00:14:26.255 { 00:14:26.255 "name": "BaseBdev2", 00:14:26.255 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:26.255 "is_configured": true, 00:14:26.255 "data_offset": 2048, 00:14:26.255 "data_size": 63488 00:14:26.255 } 00:14:26.255 ] 00:14:26.255 }' 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.255 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.515 [2024-10-11 09:48:10.897610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.515 [2024-10-11 09:48:10.932603] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.515 [2024-10-11 09:48:10.933224] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:26.515 [2024-10-11 09:48:10.933289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.515 [2024-10-11 09:48:10.933307] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.515 09:48:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.515 09:48:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.515 "name": "raid_bdev1", 00:14:26.515 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:26.515 "strip_size_kb": 0, 00:14:26.515 "state": "online", 00:14:26.515 "raid_level": "raid1", 00:14:26.515 "superblock": true, 00:14:26.515 "num_base_bdevs": 2, 00:14:26.515 "num_base_bdevs_discovered": 1, 00:14:26.515 "num_base_bdevs_operational": 1, 00:14:26.515 "base_bdevs_list": [ 00:14:26.515 { 00:14:26.515 "name": null, 00:14:26.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.515 "is_configured": false, 00:14:26.515 "data_offset": 0, 00:14:26.515 "data_size": 63488 00:14:26.515 }, 00:14:26.515 { 00:14:26.515 "name": "BaseBdev2", 00:14:26.515 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:26.515 "is_configured": true, 00:14:26.515 "data_offset": 2048, 00:14:26.515 "data_size": 63488 00:14:26.515 } 00:14:26.515 ] 00:14:26.515 }' 00:14:26.515 09:48:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.515 09:48:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.775 09:48:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.775 09:48:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.775 09:48:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.775 [2024-10-11 09:48:11.378979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.775 [2024-10-11 09:48:11.379291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.775 [2024-10-11 09:48:11.379416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:26.775 [2024-10-11 09:48:11.379506] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.775 [2024-10-11 09:48:11.380186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.775 [2024-10-11 09:48:11.380337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.775 [2024-10-11 09:48:11.380538] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:26.775 [2024-10-11 09:48:11.380598] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:26.775 [2024-10-11 09:48:11.380646] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:26.775 [2024-10-11 09:48:11.380730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.775 [2024-10-11 09:48:11.399768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:26.775 spare 00:14:26.775 09:48:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.775 09:48:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:26.775 [2024-10-11 09:48:11.401990] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.155 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.155 "name": "raid_bdev1", 00:14:28.155 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:28.155 "strip_size_kb": 0, 00:14:28.155 "state": "online", 00:14:28.155 "raid_level": "raid1", 00:14:28.155 "superblock": true, 00:14:28.155 "num_base_bdevs": 2, 00:14:28.155 "num_base_bdevs_discovered": 2, 00:14:28.155 "num_base_bdevs_operational": 2, 00:14:28.155 "process": { 00:14:28.155 "type": "rebuild", 00:14:28.155 "target": "spare", 00:14:28.155 "progress": { 00:14:28.155 "blocks": 20480, 00:14:28.155 "percent": 32 00:14:28.155 } 00:14:28.155 }, 00:14:28.155 "base_bdevs_list": [ 00:14:28.155 { 00:14:28.155 "name": "spare", 00:14:28.155 "uuid": "93704d42-0b0a-5c69-a90b-71dd95b613f6", 00:14:28.155 "is_configured": true, 00:14:28.155 "data_offset": 2048, 00:14:28.155 "data_size": 63488 00:14:28.155 }, 00:14:28.156 { 00:14:28.156 "name": "BaseBdev2", 00:14:28.156 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:28.156 "is_configured": true, 00:14:28.156 "data_offset": 2048, 00:14:28.156 "data_size": 63488 00:14:28.156 } 00:14:28.156 ] 00:14:28.156 }' 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.156 
09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.156 [2024-10-11 09:48:12.533250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.156 [2024-10-11 09:48:12.608219] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.156 [2024-10-11 09:48:12.608308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.156 [2024-10-11 09:48:12.608330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.156 [2024-10-11 09:48:12.608338] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.156 "name": "raid_bdev1", 00:14:28.156 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:28.156 "strip_size_kb": 0, 00:14:28.156 "state": "online", 00:14:28.156 "raid_level": "raid1", 00:14:28.156 "superblock": true, 00:14:28.156 "num_base_bdevs": 2, 00:14:28.156 "num_base_bdevs_discovered": 1, 00:14:28.156 "num_base_bdevs_operational": 1, 00:14:28.156 "base_bdevs_list": [ 00:14:28.156 { 00:14:28.156 "name": null, 00:14:28.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.156 "is_configured": false, 00:14:28.156 "data_offset": 0, 00:14:28.156 "data_size": 63488 00:14:28.156 }, 00:14:28.156 { 00:14:28.156 "name": "BaseBdev2", 00:14:28.156 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:28.156 "is_configured": true, 00:14:28.156 "data_offset": 2048, 00:14:28.156 "data_size": 63488 00:14:28.156 } 00:14:28.156 ] 00:14:28.156 }' 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.156 09:48:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.769 09:48:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.769 "name": "raid_bdev1", 00:14:28.769 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:28.769 "strip_size_kb": 0, 00:14:28.769 "state": "online", 00:14:28.769 "raid_level": "raid1", 00:14:28.769 "superblock": true, 00:14:28.769 "num_base_bdevs": 2, 00:14:28.769 "num_base_bdevs_discovered": 1, 00:14:28.769 "num_base_bdevs_operational": 1, 00:14:28.769 "base_bdevs_list": [ 00:14:28.769 { 00:14:28.769 "name": null, 00:14:28.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.769 "is_configured": false, 00:14:28.769 "data_offset": 0, 00:14:28.769 "data_size": 63488 00:14:28.769 }, 00:14:28.769 { 00:14:28.769 "name": "BaseBdev2", 00:14:28.769 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:28.769 "is_configured": true, 00:14:28.769 "data_offset": 2048, 00:14:28.769 "data_size": 
63488 00:14:28.769 } 00:14:28.769 ] 00:14:28.769 }' 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.769 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.769 [2024-10-11 09:48:13.251274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:28.769 [2024-10-11 09:48:13.251343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.769 [2024-10-11 09:48:13.251382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:28.769 [2024-10-11 09:48:13.251393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.769 [2024-10-11 09:48:13.251964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.769 [2024-10-11 09:48:13.251986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:28.769 [2024-10-11 09:48:13.252088] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:28.769 [2024-10-11 09:48:13.252111] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:28.769 [2024-10-11 09:48:13.252123] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:28.769 [2024-10-11 09:48:13.252135] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:28.769 BaseBdev1 00:14:28.770 09:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.770 09:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.708 "name": "raid_bdev1", 00:14:29.708 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:29.708 "strip_size_kb": 0, 00:14:29.708 "state": "online", 00:14:29.708 "raid_level": "raid1", 00:14:29.708 "superblock": true, 00:14:29.708 "num_base_bdevs": 2, 00:14:29.708 "num_base_bdevs_discovered": 1, 00:14:29.708 "num_base_bdevs_operational": 1, 00:14:29.708 "base_bdevs_list": [ 00:14:29.708 { 00:14:29.708 "name": null, 00:14:29.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.708 "is_configured": false, 00:14:29.708 "data_offset": 0, 00:14:29.708 "data_size": 63488 00:14:29.708 }, 00:14:29.708 { 00:14:29.708 "name": "BaseBdev2", 00:14:29.708 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:29.708 "is_configured": true, 00:14:29.708 "data_offset": 2048, 00:14:29.708 "data_size": 63488 00:14:29.708 } 00:14:29.708 ] 00:14:29.708 }' 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.708 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.277 "name": "raid_bdev1", 00:14:30.277 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:30.277 "strip_size_kb": 0, 00:14:30.277 "state": "online", 00:14:30.277 "raid_level": "raid1", 00:14:30.277 "superblock": true, 00:14:30.277 "num_base_bdevs": 2, 00:14:30.277 "num_base_bdevs_discovered": 1, 00:14:30.277 "num_base_bdevs_operational": 1, 00:14:30.277 "base_bdevs_list": [ 00:14:30.277 { 00:14:30.277 "name": null, 00:14:30.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.277 "is_configured": false, 00:14:30.277 "data_offset": 0, 00:14:30.277 "data_size": 63488 00:14:30.277 }, 00:14:30.277 { 00:14:30.277 "name": "BaseBdev2", 00:14:30.277 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:30.277 "is_configured": true, 00:14:30.277 "data_offset": 2048, 00:14:30.277 "data_size": 63488 00:14:30.277 } 00:14:30.277 ] 00:14:30.277 }' 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.277 09:48:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.277 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.278 [2024-10-11 09:48:14.828669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.278 [2024-10-11 09:48:14.828873] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:30.278 [2024-10-11 09:48:14.828893] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.278 request: 00:14:30.278 { 00:14:30.278 "base_bdev": "BaseBdev1", 00:14:30.278 "raid_bdev": "raid_bdev1", 00:14:30.278 "method": 
"bdev_raid_add_base_bdev", 00:14:30.278 "req_id": 1 00:14:30.278 } 00:14:30.278 Got JSON-RPC error response 00:14:30.278 response: 00:14:30.278 { 00:14:30.278 "code": -22, 00:14:30.278 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:30.278 } 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.278 09:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.217 09:48:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.217 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.476 09:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.476 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.476 "name": "raid_bdev1", 00:14:31.476 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:31.476 "strip_size_kb": 0, 00:14:31.476 "state": "online", 00:14:31.476 "raid_level": "raid1", 00:14:31.476 "superblock": true, 00:14:31.476 "num_base_bdevs": 2, 00:14:31.476 "num_base_bdevs_discovered": 1, 00:14:31.476 "num_base_bdevs_operational": 1, 00:14:31.476 "base_bdevs_list": [ 00:14:31.476 { 00:14:31.476 "name": null, 00:14:31.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.476 "is_configured": false, 00:14:31.476 "data_offset": 0, 00:14:31.476 "data_size": 63488 00:14:31.476 }, 00:14:31.476 { 00:14:31.476 "name": "BaseBdev2", 00:14:31.476 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:31.476 "is_configured": true, 00:14:31.476 "data_offset": 2048, 00:14:31.476 "data_size": 63488 00:14:31.476 } 00:14:31.476 ] 00:14:31.476 }' 00:14:31.476 09:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.476 09:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.735 "name": "raid_bdev1", 00:14:31.735 "uuid": "38cd83a0-f39e-456e-9067-ca9f3d14c03c", 00:14:31.735 "strip_size_kb": 0, 00:14:31.735 "state": "online", 00:14:31.735 "raid_level": "raid1", 00:14:31.735 "superblock": true, 00:14:31.735 "num_base_bdevs": 2, 00:14:31.735 "num_base_bdevs_discovered": 1, 00:14:31.735 "num_base_bdevs_operational": 1, 00:14:31.735 "base_bdevs_list": [ 00:14:31.735 { 00:14:31.735 "name": null, 00:14:31.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.735 "is_configured": false, 00:14:31.735 "data_offset": 0, 00:14:31.735 "data_size": 63488 00:14:31.735 }, 00:14:31.735 { 00:14:31.735 "name": "BaseBdev2", 00:14:31.735 "uuid": "3e2b6e33-cd0f-5370-b1ee-084443cac412", 00:14:31.735 "is_configured": true, 00:14:31.735 "data_offset": 2048, 00:14:31.735 "data_size": 63488 00:14:31.735 } 00:14:31.735 ] 00:14:31.735 }' 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:31.735 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76286 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76286 ']' 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 76286 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76286 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:31.995 killing process with pid 76286 00:14:31.995 Received shutdown signal, test time was about 60.000000 seconds 00:14:31.995 00:14:31.995 Latency(us) 00:14:31.995 [2024-10-11T09:48:16.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.995 [2024-10-11T09:48:16.627Z] =================================================================================================================== 00:14:31.995 [2024-10-11T09:48:16.627Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76286' 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 76286 00:14:31.995 [2024-10-11 09:48:16.447438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.995 [2024-10-11 
09:48:16.447582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.995 09:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 76286 00:14:31.995 [2024-10-11 09:48:16.447640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.995 [2024-10-11 09:48:16.447653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:32.254 [2024-10-11 09:48:16.749919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.257 09:48:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:33.257 00:14:33.257 real 0m23.601s 00:14:33.257 user 0m28.717s 00:14:33.257 sys 0m3.974s 00:14:33.257 09:48:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.257 09:48:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.257 ************************************ 00:14:33.257 END TEST raid_rebuild_test_sb 00:14:33.257 ************************************ 00:14:33.517 09:48:17 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:33.517 09:48:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:33.517 09:48:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.517 09:48:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.517 ************************************ 00:14:33.517 START TEST raid_rebuild_test_io 00:14:33.517 ************************************ 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:33.517 
09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77017 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77017 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 77017 ']' 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.517 09:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.517 [2024-10-11 09:48:18.025300] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:33.517 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.517 Zero copy mechanism will not be used. 
00:14:33.517 [2024-10-11 09:48:18.025537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77017 ] 00:14:33.776 [2024-10-11 09:48:18.188342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.776 [2024-10-11 09:48:18.325074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.035 [2024-10-11 09:48:18.575458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.035 [2024-10-11 09:48:18.575508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 BaseBdev1_malloc 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 [2024-10-11 09:48:19.000730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:34.603 [2024-10-11 09:48:19.000818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.603 [2024-10-11 09:48:19.000846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.603 [2024-10-11 09:48:19.000860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.603 [2024-10-11 09:48:19.003081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.603 [2024-10-11 09:48:19.003199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.603 BaseBdev1 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 BaseBdev2_malloc 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 [2024-10-11 09:48:19.060740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:34.603 [2024-10-11 09:48:19.060817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.603 [2024-10-11 09:48:19.060839] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.603 [2024-10-11 09:48:19.060854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.603 [2024-10-11 09:48:19.063208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.603 [2024-10-11 09:48:19.063254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.603 BaseBdev2 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 spare_malloc 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 spare_delay 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 [2024-10-11 09:48:19.148214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:14:34.603 [2024-10-11 09:48:19.148285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.603 [2024-10-11 09:48:19.148309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:34.603 [2024-10-11 09:48:19.148322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.603 [2024-10-11 09:48:19.150584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.603 [2024-10-11 09:48:19.150626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.603 spare 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 [2024-10-11 09:48:19.160242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.603 [2024-10-11 09:48:19.162486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.603 [2024-10-11 09:48:19.162596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:34.603 [2024-10-11 09:48:19.162612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:34.603 [2024-10-11 09:48:19.162950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:34.603 [2024-10-11 09:48:19.163169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:34.603 [2024-10-11 09:48:19.163196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:14:34.603 [2024-10-11 09:48:19.163453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.603 
"name": "raid_bdev1", 00:14:34.603 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:34.603 "strip_size_kb": 0, 00:14:34.603 "state": "online", 00:14:34.603 "raid_level": "raid1", 00:14:34.603 "superblock": false, 00:14:34.603 "num_base_bdevs": 2, 00:14:34.603 "num_base_bdevs_discovered": 2, 00:14:34.603 "num_base_bdevs_operational": 2, 00:14:34.603 "base_bdevs_list": [ 00:14:34.603 { 00:14:34.603 "name": "BaseBdev1", 00:14:34.603 "uuid": "d5cd82c1-19eb-5c5d-9774-d338958d5f94", 00:14:34.603 "is_configured": true, 00:14:34.603 "data_offset": 0, 00:14:34.603 "data_size": 65536 00:14:34.603 }, 00:14:34.603 { 00:14:34.603 "name": "BaseBdev2", 00:14:34.603 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:34.603 "is_configured": true, 00:14:34.603 "data_offset": 0, 00:14:34.603 "data_size": 65536 00:14:34.603 } 00:14:34.603 ] 00:14:34.603 }' 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.603 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.171 [2024-10-11 09:48:19.639824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.171 [2024-10-11 09:48:19.735339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:35.171 09:48:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.171 "name": "raid_bdev1", 00:14:35.171 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:35.171 "strip_size_kb": 0, 00:14:35.171 "state": "online", 00:14:35.171 "raid_level": "raid1", 00:14:35.171 "superblock": false, 00:14:35.171 "num_base_bdevs": 2, 00:14:35.171 "num_base_bdevs_discovered": 1, 00:14:35.171 "num_base_bdevs_operational": 1, 00:14:35.171 "base_bdevs_list": [ 00:14:35.171 { 00:14:35.171 "name": null, 00:14:35.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.171 "is_configured": false, 00:14:35.171 "data_offset": 0, 00:14:35.171 "data_size": 65536 00:14:35.171 }, 00:14:35.171 { 00:14:35.171 "name": "BaseBdev2", 00:14:35.171 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:35.171 "is_configured": true, 00:14:35.171 "data_offset": 0, 00:14:35.171 "data_size": 65536 00:14:35.171 } 00:14:35.171 ] 00:14:35.171 }' 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:35.171 09:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.430 [2024-10-11 09:48:19.844511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:35.430 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.430 Zero copy mechanism will not be used. 00:14:35.430 Running I/O for 60 seconds... 00:14:35.688 09:48:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:35.688 09:48:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.688 09:48:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.688 [2024-10-11 09:48:20.206846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.688 09:48:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.688 09:48:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:35.688 [2024-10-11 09:48:20.268995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:35.688 [2024-10-11 09:48:20.271056] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.947 [2024-10-11 09:48:20.391012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:35.947 [2024-10-11 09:48:20.391762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.205 [2024-10-11 09:48:20.599798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.205 [2024-10-11 09:48:20.600168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.464 199.00 IOPS, 597.00 MiB/s 
[2024-10-11T09:48:21.096Z] [2024-10-11 09:48:20.953801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:36.723 [2024-10-11 09:48:21.187283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:36.723 [2024-10-11 09:48:21.187717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.723 "name": "raid_bdev1", 00:14:36.723 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:36.723 "strip_size_kb": 0, 00:14:36.723 "state": "online", 00:14:36.723 "raid_level": "raid1", 00:14:36.723 "superblock": false, 00:14:36.723 "num_base_bdevs": 2, 00:14:36.723 
"num_base_bdevs_discovered": 2, 00:14:36.723 "num_base_bdevs_operational": 2, 00:14:36.723 "process": { 00:14:36.723 "type": "rebuild", 00:14:36.723 "target": "spare", 00:14:36.723 "progress": { 00:14:36.723 "blocks": 10240, 00:14:36.723 "percent": 15 00:14:36.723 } 00:14:36.723 }, 00:14:36.723 "base_bdevs_list": [ 00:14:36.723 { 00:14:36.723 "name": "spare", 00:14:36.723 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:36.723 "is_configured": true, 00:14:36.723 "data_offset": 0, 00:14:36.723 "data_size": 65536 00:14:36.723 }, 00:14:36.723 { 00:14:36.723 "name": "BaseBdev2", 00:14:36.723 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:36.723 "is_configured": true, 00:14:36.723 "data_offset": 0, 00:14:36.723 "data_size": 65536 00:14:36.723 } 00:14:36.723 ] 00:14:36.723 }' 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.723 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.982 [2024-10-11 09:48:21.412662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.982 [2024-10-11 09:48:21.531113] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:36.982 [2024-10-11 09:48:21.533710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.982 [2024-10-11 09:48:21.533831] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.982 [2024-10-11 09:48:21.533850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:36.982 [2024-10-11 09:48:21.579936] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.982 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:37.241 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.241 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.241 "name": "raid_bdev1", 00:14:37.241 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:37.241 "strip_size_kb": 0, 00:14:37.241 "state": "online", 00:14:37.241 "raid_level": "raid1", 00:14:37.241 "superblock": false, 00:14:37.241 "num_base_bdevs": 2, 00:14:37.241 "num_base_bdevs_discovered": 1, 00:14:37.241 "num_base_bdevs_operational": 1, 00:14:37.241 "base_bdevs_list": [ 00:14:37.241 { 00:14:37.241 "name": null, 00:14:37.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.241 "is_configured": false, 00:14:37.241 "data_offset": 0, 00:14:37.241 "data_size": 65536 00:14:37.241 }, 00:14:37.241 { 00:14:37.241 "name": "BaseBdev2", 00:14:37.241 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:37.241 "is_configured": true, 00:14:37.241 "data_offset": 0, 00:14:37.241 "data_size": 65536 00:14:37.241 } 00:14:37.241 ] 00:14:37.241 }' 00:14:37.241 09:48:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.241 09:48:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.499 152.00 IOPS, 456.00 MiB/s [2024-10-11T09:48:22.131Z] 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.499 "name": "raid_bdev1", 00:14:37.499 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:37.499 "strip_size_kb": 0, 00:14:37.499 "state": "online", 00:14:37.499 "raid_level": "raid1", 00:14:37.499 "superblock": false, 00:14:37.499 "num_base_bdevs": 2, 00:14:37.499 "num_base_bdevs_discovered": 1, 00:14:37.499 "num_base_bdevs_operational": 1, 00:14:37.499 "base_bdevs_list": [ 00:14:37.499 { 00:14:37.499 "name": null, 00:14:37.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.499 "is_configured": false, 00:14:37.499 "data_offset": 0, 00:14:37.499 "data_size": 65536 00:14:37.499 }, 00:14:37.499 { 00:14:37.499 "name": "BaseBdev2", 00:14:37.499 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:37.499 "is_configured": true, 00:14:37.499 "data_offset": 0, 00:14:37.499 "data_size": 65536 00:14:37.499 } 00:14:37.499 ] 00:14:37.499 }' 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.499 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.757 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.757 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.757 09:48:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.757 09:48:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.757 [2024-10-11 09:48:22.166533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.757 09:48:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.757 09:48:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:37.757 [2024-10-11 09:48:22.206552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:37.757 [2024-10-11 09:48:22.208678] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.757 [2024-10-11 09:48:22.335484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.015 [2024-10-11 09:48:22.458327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.015 [2024-10-11 09:48:22.458851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.273 [2024-10-11 09:48:22.798004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:38.531 156.33 IOPS, 469.00 MiB/s [2024-10-11T09:48:23.163Z] [2024-10-11 09:48:23.011919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.790 09:48:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.790 "name": "raid_bdev1", 00:14:38.790 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:38.790 "strip_size_kb": 0, 00:14:38.790 "state": "online", 00:14:38.790 "raid_level": "raid1", 00:14:38.790 "superblock": false, 00:14:38.790 "num_base_bdevs": 2, 00:14:38.790 "num_base_bdevs_discovered": 2, 00:14:38.790 "num_base_bdevs_operational": 2, 00:14:38.790 "process": { 00:14:38.790 "type": "rebuild", 00:14:38.790 "target": "spare", 00:14:38.790 "progress": { 00:14:38.790 "blocks": 10240, 00:14:38.790 "percent": 15 00:14:38.790 } 00:14:38.790 }, 00:14:38.790 "base_bdevs_list": [ 00:14:38.790 { 00:14:38.790 "name": "spare", 00:14:38.790 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:38.790 "is_configured": true, 00:14:38.790 "data_offset": 0, 00:14:38.790 "data_size": 65536 00:14:38.790 }, 00:14:38.790 { 00:14:38.790 "name": "BaseBdev2", 00:14:38.790 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:38.790 "is_configured": true, 00:14:38.790 "data_offset": 0, 00:14:38.790 "data_size": 65536 00:14:38.790 } 00:14:38.790 ] 00:14:38.790 }' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.790 [2024-10-11 09:48:23.340122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:38.790 [2024-10-11 09:48:23.340861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.790 "name": "raid_bdev1", 00:14:38.790 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:38.790 "strip_size_kb": 0, 00:14:38.790 "state": "online", 00:14:38.790 "raid_level": "raid1", 00:14:38.790 "superblock": false, 00:14:38.790 "num_base_bdevs": 2, 00:14:38.790 "num_base_bdevs_discovered": 2, 00:14:38.790 "num_base_bdevs_operational": 2, 00:14:38.790 "process": { 00:14:38.790 "type": "rebuild", 00:14:38.790 "target": "spare", 00:14:38.790 "progress": { 00:14:38.790 "blocks": 14336, 00:14:38.790 "percent": 21 00:14:38.790 } 00:14:38.790 }, 00:14:38.790 "base_bdevs_list": [ 00:14:38.790 { 00:14:38.790 "name": "spare", 00:14:38.790 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:38.790 "is_configured": true, 00:14:38.790 "data_offset": 0, 00:14:38.790 "data_size": 65536 00:14:38.790 }, 00:14:38.790 { 00:14:38.790 "name": "BaseBdev2", 00:14:38.790 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:38.790 "is_configured": true, 00:14:38.790 "data_offset": 0, 00:14:38.790 "data_size": 65536 00:14:38.790 } 00:14:38.790 ] 00:14:38.790 }' 00:14:38.790 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.048 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.049 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.049 09:48:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.049 09:48:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.049 [2024-10-11 09:48:23.552011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:39.307 135.75 IOPS, 407.25 MiB/s [2024-10-11T09:48:23.939Z] [2024-10-11 09:48:23.911454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:39.565 [2024-10-11 09:48:24.134059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:39.823 [2024-10-11 09:48:24.363630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:40.082 [2024-10-11 09:48:24.479637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:40.082 [2024-10-11 09:48:24.480160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.082 "name": "raid_bdev1", 00:14:40.082 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:40.082 "strip_size_kb": 0, 00:14:40.082 "state": "online", 00:14:40.082 "raid_level": "raid1", 00:14:40.082 "superblock": false, 00:14:40.082 "num_base_bdevs": 2, 00:14:40.082 "num_base_bdevs_discovered": 2, 00:14:40.082 "num_base_bdevs_operational": 2, 00:14:40.082 "process": { 00:14:40.082 "type": "rebuild", 00:14:40.082 "target": "spare", 00:14:40.082 "progress": { 00:14:40.082 "blocks": 28672, 00:14:40.082 "percent": 43 00:14:40.082 } 00:14:40.082 }, 00:14:40.082 "base_bdevs_list": [ 00:14:40.082 { 00:14:40.082 "name": "spare", 00:14:40.082 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:40.082 "is_configured": true, 00:14:40.082 "data_offset": 0, 00:14:40.082 "data_size": 65536 00:14:40.082 }, 00:14:40.082 { 00:14:40.082 "name": "BaseBdev2", 00:14:40.082 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:40.082 "is_configured": true, 00:14:40.082 "data_offset": 0, 00:14:40.082 "data_size": 65536 00:14:40.082 } 00:14:40.082 ] 00:14:40.082 }' 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.082 09:48:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.082 09:48:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.341 121.40 IOPS, 364.20 MiB/s [2024-10-11T09:48:24.973Z] [2024-10-11 09:48:24.946261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:40.341 [2024-10-11 09:48:24.946595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:40.909 [2024-10-11 09:48:25.277240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:40.909 [2024-10-11 09:48:25.478876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:40.909 [2024-10-11 09:48:25.479354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.168 09:48:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.168 "name": "raid_bdev1", 00:14:41.168 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:41.168 "strip_size_kb": 0, 00:14:41.168 "state": "online", 00:14:41.168 "raid_level": "raid1", 00:14:41.168 "superblock": false, 00:14:41.168 "num_base_bdevs": 2, 00:14:41.168 "num_base_bdevs_discovered": 2, 00:14:41.168 "num_base_bdevs_operational": 2, 00:14:41.168 "process": { 00:14:41.168 "type": "rebuild", 00:14:41.168 "target": "spare", 00:14:41.168 "progress": { 00:14:41.168 "blocks": 40960, 00:14:41.168 "percent": 62 00:14:41.168 } 00:14:41.168 }, 00:14:41.168 "base_bdevs_list": [ 00:14:41.168 { 00:14:41.168 "name": "spare", 00:14:41.168 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:41.168 "is_configured": true, 00:14:41.168 "data_offset": 0, 00:14:41.168 "data_size": 65536 00:14:41.168 }, 00:14:41.168 { 00:14:41.168 "name": "BaseBdev2", 00:14:41.168 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:41.168 "is_configured": true, 00:14:41.168 "data_offset": 0, 00:14:41.168 "data_size": 65536 00:14:41.168 } 00:14:41.168 ] 00:14:41.168 }' 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.168 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.427 [2024-10-11 09:48:25.806996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:41.427 09:48:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.427 09:48:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.686 108.83 IOPS, 326.50 MiB/s [2024-10-11T09:48:26.318Z] [2024-10-11 09:48:26.142460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:41.686 [2024-10-11 09:48:26.143064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:41.944 [2024-10-11 09:48:26.488839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:42.203 [2024-10-11 09:48:26.712426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:42.462 98.71 IOPS, 296.14 MiB/s [2024-10-11T09:48:27.094Z] 09:48:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.462 "name": "raid_bdev1", 00:14:42.462 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:42.462 "strip_size_kb": 0, 00:14:42.462 "state": "online", 00:14:42.462 "raid_level": "raid1", 00:14:42.462 "superblock": false, 00:14:42.462 "num_base_bdevs": 2, 00:14:42.462 "num_base_bdevs_discovered": 2, 00:14:42.462 "num_base_bdevs_operational": 2, 00:14:42.462 "process": { 00:14:42.462 "type": "rebuild", 00:14:42.462 "target": "spare", 00:14:42.462 "progress": { 00:14:42.462 "blocks": 59392, 00:14:42.462 "percent": 90 00:14:42.462 } 00:14:42.462 }, 00:14:42.462 "base_bdevs_list": [ 00:14:42.462 { 00:14:42.462 "name": "spare", 00:14:42.462 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:42.462 "is_configured": true, 00:14:42.462 "data_offset": 0, 00:14:42.462 "data_size": 65536 00:14:42.462 }, 00:14:42.462 { 00:14:42.462 "name": "BaseBdev2", 00:14:42.462 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:42.462 "is_configured": true, 00:14:42.462 "data_offset": 0, 00:14:42.462 "data_size": 65536 00:14:42.462 } 00:14:42.462 ] 00:14:42.462 }' 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.462 09:48:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.721 [2024-10-11 09:48:27.139200] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:14:42.721 [2024-10-11 09:48:27.244887] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:42.721 [2024-10-11 09:48:27.248069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.547 89.25 IOPS, 267.75 MiB/s [2024-10-11T09:48:28.179Z] 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.547 09:48:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.547 "name": "raid_bdev1", 00:14:43.547 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:43.547 "strip_size_kb": 0, 00:14:43.547 "state": "online", 00:14:43.547 "raid_level": "raid1", 00:14:43.547 "superblock": false, 00:14:43.547 "num_base_bdevs": 2, 00:14:43.547 "num_base_bdevs_discovered": 2, 00:14:43.547 "num_base_bdevs_operational": 2, 
00:14:43.547 "base_bdevs_list": [ 00:14:43.547 { 00:14:43.547 "name": "spare", 00:14:43.547 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:43.547 "is_configured": true, 00:14:43.547 "data_offset": 0, 00:14:43.547 "data_size": 65536 00:14:43.547 }, 00:14:43.547 { 00:14:43.547 "name": "BaseBdev2", 00:14:43.547 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:43.547 "is_configured": true, 00:14:43.547 "data_offset": 0, 00:14:43.547 "data_size": 65536 00:14:43.547 } 00:14:43.547 ] 00:14:43.547 }' 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.547 09:48:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.547 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.807 "name": "raid_bdev1", 00:14:43.807 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:43.807 "strip_size_kb": 0, 00:14:43.807 "state": "online", 00:14:43.807 "raid_level": "raid1", 00:14:43.807 "superblock": false, 00:14:43.807 "num_base_bdevs": 2, 00:14:43.807 "num_base_bdevs_discovered": 2, 00:14:43.807 "num_base_bdevs_operational": 2, 00:14:43.807 "base_bdevs_list": [ 00:14:43.807 { 00:14:43.807 "name": "spare", 00:14:43.807 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:43.807 "is_configured": true, 00:14:43.807 "data_offset": 0, 00:14:43.807 "data_size": 65536 00:14:43.807 }, 00:14:43.807 { 00:14:43.807 "name": "BaseBdev2", 00:14:43.807 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:43.807 "is_configured": true, 00:14:43.807 "data_offset": 0, 00:14:43.807 "data_size": 65536 00:14:43.807 } 00:14:43.807 ] 00:14:43.807 }' 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.807 09:48:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.807 "name": "raid_bdev1", 00:14:43.807 "uuid": "02ed7e97-7923-4d22-a39a-1ac47520b43d", 00:14:43.807 "strip_size_kb": 0, 00:14:43.807 "state": "online", 00:14:43.807 "raid_level": "raid1", 00:14:43.807 "superblock": false, 00:14:43.807 "num_base_bdevs": 2, 00:14:43.807 "num_base_bdevs_discovered": 2, 00:14:43.807 "num_base_bdevs_operational": 2, 00:14:43.807 "base_bdevs_list": [ 00:14:43.807 { 00:14:43.807 "name": "spare", 00:14:43.807 "uuid": "7a28cc7d-058c-5486-86ca-2ddb50f85b92", 00:14:43.807 "is_configured": true, 00:14:43.807 "data_offset": 0, 00:14:43.807 "data_size": 65536 00:14:43.807 }, 00:14:43.807 { 
00:14:43.807 "name": "BaseBdev2", 00:14:43.807 "uuid": "0fbf236c-80ca-5b4b-b453-641c786e808c", 00:14:43.807 "is_configured": true, 00:14:43.807 "data_offset": 0, 00:14:43.807 "data_size": 65536 00:14:43.807 } 00:14:43.807 ] 00:14:43.807 }' 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.807 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.377 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.377 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.377 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.377 [2024-10-11 09:48:28.766439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.377 [2024-10-11 09:48:28.766537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.377 83.33 IOPS, 250.00 MiB/s 00:14:44.377 Latency(us) 00:14:44.377 [2024-10-11T09:48:29.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.377 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:44.377 raid_bdev1 : 9.04 83.10 249.31 0.00 0.00 16716.79 327.32 117220.72 00:14:44.377 [2024-10-11T09:48:29.009Z] =================================================================================================================== 00:14:44.377 [2024-10-11T09:48:29.009Z] Total : 83.10 249.31 0.00 0.00 16716.79 327.32 117220.72 00:14:44.377 { 00:14:44.377 "results": [ 00:14:44.377 { 00:14:44.377 "job": "raid_bdev1", 00:14:44.377 "core_mask": "0x1", 00:14:44.377 "workload": "randrw", 00:14:44.377 "percentage": 50, 00:14:44.377 "status": "finished", 00:14:44.377 "queue_depth": 2, 00:14:44.377 "io_size": 3145728, 00:14:44.377 "runtime": 9.036892, 00:14:44.377 "iops": 83.10379276414945, 00:14:44.377 
"mibps": 249.31137829244835, 00:14:44.377 "io_failed": 0, 00:14:44.377 "io_timeout": 0, 00:14:44.377 "avg_latency_us": 16716.78943126777, 00:14:44.377 "min_latency_us": 327.32227074235806, 00:14:44.377 "max_latency_us": 117220.7231441048 00:14:44.377 } 00:14:44.377 ], 00:14:44.377 "core_count": 1 00:14:44.377 } 00:14:44.378 [2024-10-11 09:48:28.890425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.378 [2024-10-11 09:48:28.890482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.378 [2024-10-11 09:48:28.890581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.378 [2024-10-11 09:48:28.890596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.378 09:48:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:44.638 /dev/nbd0 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:44.638 09:48:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.638 1+0 records in 00:14:44.638 1+0 records out 00:14:44.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644886 s, 6.4 MB/s 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.638 09:48:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.638 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:44.897 /dev/nbd1 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.897 1+0 records in 00:14:44.897 1+0 records out 00:14:44.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049705 s, 8.2 MB/s 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.897 
09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.897 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:45.157 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:45.157 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.157 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:45.157 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.157 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:45.157 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.157 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.416 09:48:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.416 09:48:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77017 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 77017 ']' 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 77017 00:14:45.676 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:45.677 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.677 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77017 00:14:45.677 killing process with pid 77017 00:14:45.677 Received shutdown signal, test time was about 10.459008 seconds 00:14:45.677 00:14:45.677 Latency(us) 00:14:45.677 [2024-10-11T09:48:30.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.677 [2024-10-11T09:48:30.309Z] =================================================================================================================== 00:14:45.677 [2024-10-11T09:48:30.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.677 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:45.677 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:45.677 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77017' 00:14:45.677 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 77017 00:14:45.677 [2024-10-11 09:48:30.285603] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.677 09:48:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 77017 00:14:45.936 [2024-10-11 09:48:30.550188] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:47.314 00:14:47.314 real 0m13.875s 00:14:47.314 user 0m17.413s 00:14:47.314 sys 0m1.580s 00:14:47.314 ************************************ 00:14:47.314 END TEST raid_rebuild_test_io 00:14:47.314 ************************************ 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.314 09:48:31 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:47.314 09:48:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:47.314 09:48:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.314 09:48:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.314 ************************************ 00:14:47.314 START TEST raid_rebuild_test_sb_io 00:14:47.314 ************************************ 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77413 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 
-- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77413 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 77413 ']' 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.314 09:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.572 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:47.572 Zero copy mechanism will not be used. 00:14:47.572 [2024-10-11 09:48:31.968273] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:14:47.572 [2024-10-11 09:48:31.968395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77413 ] 00:14:47.572 [2024-10-11 09:48:32.134181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.830 [2024-10-11 09:48:32.272897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.088 [2024-10-11 09:48:32.522286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.088 [2024-10-11 09:48:32.522362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 BaseBdev1_malloc 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 [2024-10-11 09:48:32.902868] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:48.347 [2024-10-11 09:48:32.902993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.347 [2024-10-11 09:48:32.903039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:48.347 [2024-10-11 09:48:32.903083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.347 [2024-10-11 09:48:32.905392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.347 BaseBdev1 00:14:48.347 [2024-10-11 09:48:32.905485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 BaseBdev2_malloc 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 [2024-10-11 09:48:32.969090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:48.347 [2024-10-11 09:48:32.969215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:48.347 [2024-10-11 09:48:32.969283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:48.347 [2024-10-11 09:48:32.969323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.347 [2024-10-11 09:48:32.971771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.347 [2024-10-11 09:48:32.971868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:48.347 BaseBdev2 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.347 09:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.606 spare_malloc 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.606 spare_delay 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.606 
[2024-10-11 09:48:33.059139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:48.606 [2024-10-11 09:48:33.059269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.606 [2024-10-11 09:48:33.059319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:48.606 [2024-10-11 09:48:33.059361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.606 [2024-10-11 09:48:33.061859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.606 [2024-10-11 09:48:33.061944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:48.606 spare 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.606 [2024-10-11 09:48:33.071173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.606 [2024-10-11 09:48:33.073309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.606 [2024-10-11 09:48:33.073646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:48.606 [2024-10-11 09:48:33.073709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:48.606 [2024-10-11 09:48:33.074077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:48.606 [2024-10-11 09:48:33.074399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:48.606 [2024-10-11 
09:48:33.074486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:48.606 [2024-10-11 09:48:33.074777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.606 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.607 "name": "raid_bdev1", 00:14:48.607 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:48.607 "strip_size_kb": 0, 00:14:48.607 "state": "online", 00:14:48.607 "raid_level": "raid1", 00:14:48.607 "superblock": true, 00:14:48.607 "num_base_bdevs": 2, 00:14:48.607 "num_base_bdevs_discovered": 2, 00:14:48.607 "num_base_bdevs_operational": 2, 00:14:48.607 "base_bdevs_list": [ 00:14:48.607 { 00:14:48.607 "name": "BaseBdev1", 00:14:48.607 "uuid": "271616b9-64cb-5891-a0a6-7da646db2121", 00:14:48.607 "is_configured": true, 00:14:48.607 "data_offset": 2048, 00:14:48.607 "data_size": 63488 00:14:48.607 }, 00:14:48.607 { 00:14:48.607 "name": "BaseBdev2", 00:14:48.607 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:48.607 "is_configured": true, 00:14:48.607 "data_offset": 2048, 00:14:48.607 "data_size": 63488 00:14:48.607 } 00:14:48.607 ] 00:14:48.607 }' 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.607 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.173 [2024-10-11 09:48:33.510801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.173 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.173 [2024-10-11 09:48:33.590302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.174 "name": "raid_bdev1", 00:14:49.174 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:49.174 "strip_size_kb": 0, 00:14:49.174 "state": "online", 00:14:49.174 "raid_level": "raid1", 00:14:49.174 "superblock": true, 00:14:49.174 "num_base_bdevs": 2, 00:14:49.174 "num_base_bdevs_discovered": 1, 00:14:49.174 "num_base_bdevs_operational": 1, 00:14:49.174 "base_bdevs_list": [ 00:14:49.174 { 00:14:49.174 "name": null, 00:14:49.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.174 "is_configured": false, 00:14:49.174 "data_offset": 0, 00:14:49.174 "data_size": 63488 00:14:49.174 }, 00:14:49.174 { 00:14:49.174 "name": "BaseBdev2", 00:14:49.174 "uuid": 
"d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:49.174 "is_configured": true, 00:14:49.174 "data_offset": 2048, 00:14:49.174 "data_size": 63488 00:14:49.174 } 00:14:49.174 ] 00:14:49.174 }' 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.174 09:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.174 [2024-10-11 09:48:33.703997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:49.174 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:49.174 Zero copy mechanism will not be used. 00:14:49.174 Running I/O for 60 seconds... 00:14:49.741 09:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.741 09:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.741 09:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.741 [2024-10-11 09:48:34.087207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.741 09:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.741 09:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:49.741 [2024-10-11 09:48:34.148284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:49.741 [2024-10-11 09:48:34.150339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.741 [2024-10-11 09:48:34.266019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:49.741 [2024-10-11 09:48:34.266729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.000 [2024-10-11 09:48:34.477878] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:50.000 [2024-10-11 09:48:34.478236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:50.258 146.00 IOPS, 438.00 MiB/s [2024-10-11T09:48:34.890Z] [2024-10-11 09:48:34.818396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:50.517 [2024-10-11 09:48:34.943078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.517 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.813 [2024-10-11 09:48:35.153646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:50.813 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.813 09:48:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.813 "name": "raid_bdev1", 00:14:50.813 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:50.813 "strip_size_kb": 0, 00:14:50.813 "state": "online", 00:14:50.813 "raid_level": "raid1", 00:14:50.813 "superblock": true, 00:14:50.813 "num_base_bdevs": 2, 00:14:50.813 "num_base_bdevs_discovered": 2, 00:14:50.813 "num_base_bdevs_operational": 2, 00:14:50.813 "process": { 00:14:50.813 "type": "rebuild", 00:14:50.813 "target": "spare", 00:14:50.813 "progress": { 00:14:50.813 "blocks": 12288, 00:14:50.813 "percent": 19 00:14:50.813 } 00:14:50.813 }, 00:14:50.813 "base_bdevs_list": [ 00:14:50.813 { 00:14:50.813 "name": "spare", 00:14:50.813 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:50.813 "is_configured": true, 00:14:50.813 "data_offset": 2048, 00:14:50.813 "data_size": 63488 00:14:50.813 }, 00:14:50.813 { 00:14:50.813 "name": "BaseBdev2", 00:14:50.813 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:50.813 "is_configured": true, 00:14:50.813 "data_offset": 2048, 00:14:50.813 "data_size": 63488 00:14:50.813 } 00:14:50.813 ] 00:14:50.813 }' 00:14:50.813 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.813 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.813 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.813 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.813 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:50.814 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.814 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.814 [2024-10-11 
09:48:35.286709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.814 [2024-10-11 09:48:35.286791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:50.814 [2024-10-11 09:48:35.394079] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.814 [2024-10-11 09:48:35.397100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.814 [2024-10-11 09:48:35.397206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.814 [2024-10-11 09:48:35.397229] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.089 [2024-10-11 09:48:35.454420] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.089 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.089 "name": "raid_bdev1", 00:14:51.089 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:51.089 "strip_size_kb": 0, 00:14:51.089 "state": "online", 00:14:51.089 "raid_level": "raid1", 00:14:51.089 "superblock": true, 00:14:51.089 "num_base_bdevs": 2, 00:14:51.089 "num_base_bdevs_discovered": 1, 00:14:51.089 "num_base_bdevs_operational": 1, 00:14:51.089 "base_bdevs_list": [ 00:14:51.089 { 00:14:51.089 "name": null, 00:14:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.090 "is_configured": false, 00:14:51.090 "data_offset": 0, 00:14:51.090 "data_size": 63488 00:14:51.090 }, 00:14:51.090 { 00:14:51.090 "name": "BaseBdev2", 00:14:51.090 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:51.090 "is_configured": true, 00:14:51.090 "data_offset": 2048, 00:14:51.090 "data_size": 63488 00:14:51.090 } 00:14:51.090 ] 00:14:51.090 }' 00:14:51.090 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.090 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.363 140.00 IOPS, 420.00 MiB/s [2024-10-11T09:48:35.995Z] 09:48:35 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.363 09:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.622 "name": "raid_bdev1", 00:14:51.622 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:51.622 "strip_size_kb": 0, 00:14:51.622 "state": "online", 00:14:51.622 "raid_level": "raid1", 00:14:51.622 "superblock": true, 00:14:51.622 "num_base_bdevs": 2, 00:14:51.622 "num_base_bdevs_discovered": 1, 00:14:51.622 "num_base_bdevs_operational": 1, 00:14:51.622 "base_bdevs_list": [ 00:14:51.622 { 00:14:51.622 "name": null, 00:14:51.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.622 "is_configured": false, 00:14:51.622 "data_offset": 0, 00:14:51.622 "data_size": 63488 00:14:51.622 }, 00:14:51.622 { 00:14:51.622 "name": "BaseBdev2", 00:14:51.622 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:51.622 "is_configured": true, 00:14:51.622 "data_offset": 2048, 00:14:51.622 "data_size": 
63488 00:14:51.622 } 00:14:51.622 ] 00:14:51.622 }' 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.622 [2024-10-11 09:48:36.152807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.622 09:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:51.622 [2024-10-11 09:48:36.219507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:51.622 [2024-10-11 09:48:36.221832] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.881 [2024-10-11 09:48:36.347183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:51.881 [2024-10-11 09:48:36.347829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:51.881 [2024-10-11 09:48:36.466879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:51.881 [2024-10-11 09:48:36.467268] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:52.400 154.33 IOPS, 463.00 MiB/s [2024-10-11T09:48:37.032Z] [2024-10-11 09:48:36.803265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.659 "name": "raid_bdev1", 00:14:52.659 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:52.659 "strip_size_kb": 0, 00:14:52.659 "state": "online", 00:14:52.659 "raid_level": "raid1", 00:14:52.659 "superblock": true, 00:14:52.659 "num_base_bdevs": 2, 00:14:52.659 "num_base_bdevs_discovered": 2, 00:14:52.659 "num_base_bdevs_operational": 2, 00:14:52.659 "process": { 00:14:52.659 "type": "rebuild", 00:14:52.659 
"target": "spare", 00:14:52.659 "progress": { 00:14:52.659 "blocks": 12288, 00:14:52.659 "percent": 19 00:14:52.659 } 00:14:52.659 }, 00:14:52.659 "base_bdevs_list": [ 00:14:52.659 { 00:14:52.659 "name": "spare", 00:14:52.659 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:52.659 "is_configured": true, 00:14:52.659 "data_offset": 2048, 00:14:52.659 "data_size": 63488 00:14:52.659 }, 00:14:52.659 { 00:14:52.659 "name": "BaseBdev2", 00:14:52.659 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:52.659 "is_configured": true, 00:14:52.659 "data_offset": 2048, 00:14:52.659 "data_size": 63488 00:14:52.659 } 00:14:52.659 ] 00:14:52.659 }' 00:14:52.659 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.659 [2024-10-11 09:48:37.266665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:52.659 [2024-10-11 09:48:37.267235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:52.918 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 
00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=433 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.918 [2024-10-11 09:48:37.399316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:52.918 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.918 "name": "raid_bdev1", 00:14:52.918 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:52.918 "strip_size_kb": 0, 00:14:52.918 "state": "online", 00:14:52.918 "raid_level": "raid1", 00:14:52.918 "superblock": true, 00:14:52.918 "num_base_bdevs": 2, 
00:14:52.918 "num_base_bdevs_discovered": 2, 00:14:52.918 "num_base_bdevs_operational": 2, 00:14:52.918 "process": { 00:14:52.918 "type": "rebuild", 00:14:52.918 "target": "spare", 00:14:52.918 "progress": { 00:14:52.918 "blocks": 14336, 00:14:52.918 "percent": 22 00:14:52.918 } 00:14:52.918 }, 00:14:52.918 "base_bdevs_list": [ 00:14:52.918 { 00:14:52.918 "name": "spare", 00:14:52.918 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:52.918 "is_configured": true, 00:14:52.918 "data_offset": 2048, 00:14:52.918 "data_size": 63488 00:14:52.918 }, 00:14:52.918 { 00:14:52.918 "name": "BaseBdev2", 00:14:52.919 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:52.919 "is_configured": true, 00:14:52.919 "data_offset": 2048, 00:14:52.919 "data_size": 63488 00:14:52.919 } 00:14:52.919 ] 00:14:52.919 }' 00:14:52.919 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.919 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.919 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.919 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.919 09:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.178 146.75 IOPS, 440.25 MiB/s [2024-10-11T09:48:37.810Z] [2024-10-11 09:48:37.713422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:53.437 [2024-10-11 09:48:37.914866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.005 "name": "raid_bdev1", 00:14:54.005 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:54.005 "strip_size_kb": 0, 00:14:54.005 "state": "online", 00:14:54.005 "raid_level": "raid1", 00:14:54.005 "superblock": true, 00:14:54.005 "num_base_bdevs": 2, 00:14:54.005 "num_base_bdevs_discovered": 2, 00:14:54.005 "num_base_bdevs_operational": 2, 00:14:54.005 "process": { 00:14:54.005 "type": "rebuild", 00:14:54.005 "target": "spare", 00:14:54.005 "progress": { 00:14:54.005 "blocks": 30720, 00:14:54.005 "percent": 48 00:14:54.005 } 00:14:54.005 }, 00:14:54.005 "base_bdevs_list": [ 00:14:54.005 { 00:14:54.005 "name": "spare", 00:14:54.005 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:54.005 "is_configured": true, 00:14:54.005 "data_offset": 2048, 00:14:54.005 "data_size": 63488 00:14:54.005 }, 00:14:54.005 { 00:14:54.005 "name": "BaseBdev2", 00:14:54.005 
"uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:54.005 "is_configured": true, 00:14:54.005 "data_offset": 2048, 00:14:54.005 "data_size": 63488 00:14:54.005 } 00:14:54.005 ] 00:14:54.005 }' 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.005 09:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.205 127.80 IOPS, 383.40 MiB/s [2024-10-11T09:48:39.837Z] 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.205 [2024-10-11 09:48:39.662585] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.205 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.205 "name": "raid_bdev1", 00:14:55.205 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:55.205 "strip_size_kb": 0, 00:14:55.205 "state": "online", 00:14:55.205 "raid_level": "raid1", 00:14:55.205 "superblock": true, 00:14:55.205 "num_base_bdevs": 2, 00:14:55.205 "num_base_bdevs_discovered": 2, 00:14:55.205 "num_base_bdevs_operational": 2, 00:14:55.205 "process": { 00:14:55.205 "type": "rebuild", 00:14:55.206 "target": "spare", 00:14:55.206 "progress": { 00:14:55.206 "blocks": 51200, 00:14:55.206 "percent": 80 00:14:55.206 } 00:14:55.206 }, 00:14:55.206 "base_bdevs_list": [ 00:14:55.206 { 00:14:55.206 "name": "spare", 00:14:55.206 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:55.206 "is_configured": true, 00:14:55.206 "data_offset": 2048, 00:14:55.206 "data_size": 63488 00:14:55.206 }, 00:14:55.206 { 00:14:55.206 "name": "BaseBdev2", 00:14:55.206 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:55.206 "is_configured": true, 00:14:55.206 "data_offset": 2048, 00:14:55.206 "data_size": 63488 00:14:55.206 } 00:14:55.206 ] 00:14:55.206 }' 00:14:55.206 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.206 113.00 IOPS, 339.00 MiB/s [2024-10-11T09:48:39.838Z] 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.206 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.206 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.206 09:48:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.773 [2024-10-11 09:48:40.316452] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:56.031 [2024-10-11 09:48:40.420306] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:56.031 [2024-10-11 09:48:40.423800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.290 100.86 IOPS, 302.57 MiB/s [2024-10-11T09:48:40.922Z] 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.290 "name": "raid_bdev1", 00:14:56.290 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:56.290 "strip_size_kb": 0, 00:14:56.290 "state": "online", 
00:14:56.290 "raid_level": "raid1", 00:14:56.290 "superblock": true, 00:14:56.290 "num_base_bdevs": 2, 00:14:56.290 "num_base_bdevs_discovered": 2, 00:14:56.290 "num_base_bdevs_operational": 2, 00:14:56.290 "base_bdevs_list": [ 00:14:56.290 { 00:14:56.290 "name": "spare", 00:14:56.290 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:56.290 "is_configured": true, 00:14:56.290 "data_offset": 2048, 00:14:56.290 "data_size": 63488 00:14:56.290 }, 00:14:56.290 { 00:14:56.290 "name": "BaseBdev2", 00:14:56.290 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:56.290 "is_configured": true, 00:14:56.290 "data_offset": 2048, 00:14:56.290 "data_size": 63488 00:14:56.290 } 00:14:56.290 ] 00:14:56.290 }' 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.290 09:48:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.290 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.548 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.548 "name": "raid_bdev1", 00:14:56.548 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:56.548 "strip_size_kb": 0, 00:14:56.548 "state": "online", 00:14:56.548 "raid_level": "raid1", 00:14:56.548 "superblock": true, 00:14:56.548 "num_base_bdevs": 2, 00:14:56.548 "num_base_bdevs_discovered": 2, 00:14:56.548 "num_base_bdevs_operational": 2, 00:14:56.548 "base_bdevs_list": [ 00:14:56.548 { 00:14:56.548 "name": "spare", 00:14:56.548 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:56.548 "is_configured": true, 00:14:56.548 "data_offset": 2048, 00:14:56.548 "data_size": 63488 00:14:56.548 }, 00:14:56.548 { 00:14:56.548 "name": "BaseBdev2", 00:14:56.548 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:56.548 "is_configured": true, 00:14:56.548 "data_offset": 2048, 00:14:56.548 "data_size": 63488 00:14:56.548 } 00:14:56.548 ] 00:14:56.548 }' 00:14:56.548 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.548 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.548 09:48:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.548 
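Editor's note: the rebuild `progress` objects logged during this run pair `blocks` 12288/14336/30720/51200 with `percent` 19/22/48/80 against a `data_size` of 63488 blocks. Those percentages are consistent with plain integer arithmetic; that SPDK derives them exactly this way is my assumption, not something the log states:

```shell
# Progress pairs taken verbatim from the raid_bdev_info JSON in this
# log; percent looks like truncating integer division over data_size.
data_size=63488
p1=$(( 12288 * 100 / data_size ))
p2=$(( 14336 * 100 / data_size ))
p3=$(( 30720 * 100 / data_size ))
p4=$(( 51200 * 100 / data_size ))
echo "$p1 $p2 $p3 $p4"   # 19 22 48 80, matching the logged values
```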
09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.548 "name": "raid_bdev1", 00:14:56.548 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:56.548 "strip_size_kb": 0, 00:14:56.548 "state": "online", 00:14:56.548 "raid_level": "raid1", 00:14:56.548 "superblock": true, 00:14:56.548 "num_base_bdevs": 2, 00:14:56.548 "num_base_bdevs_discovered": 2, 00:14:56.548 
"num_base_bdevs_operational": 2, 00:14:56.548 "base_bdevs_list": [ 00:14:56.548 { 00:14:56.548 "name": "spare", 00:14:56.548 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:56.548 "is_configured": true, 00:14:56.548 "data_offset": 2048, 00:14:56.548 "data_size": 63488 00:14:56.548 }, 00:14:56.548 { 00:14:56.548 "name": "BaseBdev2", 00:14:56.548 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:56.548 "is_configured": true, 00:14:56.548 "data_offset": 2048, 00:14:56.548 "data_size": 63488 00:14:56.548 } 00:14:56.548 ] 00:14:56.548 }' 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.548 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.116 [2024-10-11 09:48:41.513351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.116 [2024-10-11 09:48:41.513465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.116 00:14:57.116 Latency(us) 00:14:57.116 [2024-10-11T09:48:41.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.116 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:57.116 raid_bdev1 : 7.92 94.23 282.68 0.00 0.00 15197.43 339.84 116762.83 00:14:57.116 [2024-10-11T09:48:41.748Z] =================================================================================================================== 00:14:57.116 [2024-10-11T09:48:41.748Z] Total : 94.23 282.68 0.00 0.00 15197.43 339.84 116762.83 00:14:57.116 [2024-10-11 09:48:41.632308] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.116 { 00:14:57.116 "results": [ 00:14:57.116 { 00:14:57.116 "job": "raid_bdev1", 00:14:57.116 "core_mask": "0x1", 00:14:57.116 "workload": "randrw", 00:14:57.116 "percentage": 50, 00:14:57.116 "status": "finished", 00:14:57.116 "queue_depth": 2, 00:14:57.116 "io_size": 3145728, 00:14:57.116 "runtime": 7.917198, 00:14:57.116 "iops": 94.22525494499443, 00:14:57.116 "mibps": 282.6757648349833, 00:14:57.116 "io_failed": 0, 00:14:57.116 "io_timeout": 0, 00:14:57.116 "avg_latency_us": 15197.433391479448, 00:14:57.116 "min_latency_us": 339.8427947598253, 00:14:57.116 "max_latency_us": 116762.82969432314 00:14:57.116 } 00:14:57.116 ], 00:14:57.116 "core_count": 1 00:14:57.116 } 00:14:57.116 [2024-10-11 09:48:41.632452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.116 [2024-10-11 09:48:41.632577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.116 [2024-10-11 09:48:41.632594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = 
true ']' 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.116 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:57.375 /dev/nbd0 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.375 1+0 records in 00:14:57.375 1+0 records out 00:14:57.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040078 s, 10.2 MB/s 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:57.375 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.376 09:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:57.634 /dev/nbd1 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:57.634 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.634 1+0 records in 00:14:57.634 1+0 records out 00:14:57.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426745 s, 9.6 MB/s 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:57.892 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.892 09:48:42 
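Editor's note on the `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above: it compares the rebuilt spare against BaseBdev2 over NBD while skipping the first 1 MiB of both devices, which matches the logged `data_offset` of 2048 blocks at 512 bytes each (the region holding the raid superblock, which legitimately differs per base bdev). A self-contained sketch of the same `-i` (skip initial bytes of both inputs, GNU cmp) idiom using temp files in place of the nbd devices:

```shell
# Two files that differ only inside the skipped prefix, analogous to
# per-bdev superblocks living before data_offset.
a=$(mktemp); b=$(mktemp)
printf 'SB1payload' > "$a"
printf 'SB2payload' > "$b"

# cmp -i N skips N bytes of BOTH inputs before comparing, so the
# differing 3-byte "superblocks" are ignored and only the payload
# ("payload" vs "payload") is checked.
if cmp -s -i 3 "$a" "$b"; then
  cmp_result=identical
else
  cmp_result=different
fi
rm -f "$a" "$b"
echo "$cmp_result"   # identical
```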
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.151 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.410 [2024-10-11 09:48:42.993483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:58.410 [2024-10-11 09:48:42.993608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.410 [2024-10-11 09:48:42.993678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:58.410 [2024-10-11 09:48:42.993717] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.410 [2024-10-11 09:48:42.996513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.410 [2024-10-11 09:48:42.996624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:58.410 [2024-10-11 09:48:42.996786] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:58.410 [2024-10-11 09:48:42.996882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.410 spare 00:14:58.410 [2024-10-11 09:48:42.997086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.410 09:48:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.669 [2024-10-11 09:48:43.097033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:58.669 [2024-10-11 09:48:43.097176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:58.669 [2024-10-11 09:48:43.097585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:58.669 [2024-10-11 09:48:43.097863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:58.669 [2024-10-11 09:48:43.097919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:58.669 [2024-10-11 09:48:43.098183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.669 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.669 "name": "raid_bdev1", 00:14:58.669 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:58.669 "strip_size_kb": 0, 00:14:58.669 "state": "online", 00:14:58.669 "raid_level": 
"raid1", 00:14:58.669 "superblock": true, 00:14:58.669 "num_base_bdevs": 2, 00:14:58.669 "num_base_bdevs_discovered": 2, 00:14:58.669 "num_base_bdevs_operational": 2, 00:14:58.669 "base_bdevs_list": [ 00:14:58.669 { 00:14:58.669 "name": "spare", 00:14:58.669 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:58.669 "is_configured": true, 00:14:58.670 "data_offset": 2048, 00:14:58.670 "data_size": 63488 00:14:58.670 }, 00:14:58.670 { 00:14:58.670 "name": "BaseBdev2", 00:14:58.670 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:58.670 "is_configured": true, 00:14:58.670 "data_offset": 2048, 00:14:58.670 "data_size": 63488 00:14:58.670 } 00:14:58.670 ] 00:14:58.670 }' 00:14:58.670 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.670 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.239 "name": "raid_bdev1", 00:14:59.239 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:59.239 "strip_size_kb": 0, 00:14:59.239 "state": "online", 00:14:59.239 "raid_level": "raid1", 00:14:59.239 "superblock": true, 00:14:59.239 "num_base_bdevs": 2, 00:14:59.239 "num_base_bdevs_discovered": 2, 00:14:59.239 "num_base_bdevs_operational": 2, 00:14:59.239 "base_bdevs_list": [ 00:14:59.239 { 00:14:59.239 "name": "spare", 00:14:59.239 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:14:59.239 "is_configured": true, 00:14:59.239 "data_offset": 2048, 00:14:59.239 "data_size": 63488 00:14:59.239 }, 00:14:59.239 { 00:14:59.239 "name": "BaseBdev2", 00:14:59.239 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:59.239 "is_configured": true, 00:14:59.239 "data_offset": 2048, 00:14:59.239 "data_size": 63488 00:14:59.239 } 00:14:59.239 ] 00:14:59.239 }' 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 [2024-10-11 09:48:43.777106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.239 09:48:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.239 "name": "raid_bdev1", 00:14:59.239 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:14:59.239 "strip_size_kb": 0, 00:14:59.239 "state": "online", 00:14:59.239 "raid_level": "raid1", 00:14:59.239 "superblock": true, 00:14:59.239 "num_base_bdevs": 2, 00:14:59.239 "num_base_bdevs_discovered": 1, 00:14:59.239 "num_base_bdevs_operational": 1, 00:14:59.239 "base_bdevs_list": [ 00:14:59.239 { 00:14:59.239 "name": null, 00:14:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.239 "is_configured": false, 00:14:59.239 "data_offset": 0, 00:14:59.239 "data_size": 63488 00:14:59.239 }, 00:14:59.239 { 00:14:59.239 "name": "BaseBdev2", 00:14:59.239 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:14:59.239 "is_configured": true, 00:14:59.239 "data_offset": 2048, 00:14:59.239 "data_size": 63488 00:14:59.239 } 00:14:59.239 ] 00:14:59.239 }' 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.239 09:48:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.808 09:48:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.808 09:48:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.808 09:48:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.808 [2024-10-11 09:48:44.268423] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.808 [2024-10-11 09:48:44.268766] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:59.808 [2024-10-11 09:48:44.268838] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:59.808 [2024-10-11 09:48:44.268931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.808 [2024-10-11 09:48:44.288988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:59.808 09:48:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.808 09:48:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:59.808 [2024-10-11 09:48:44.291148] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.743 "name": "raid_bdev1", 00:15:00.743 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:00.743 "strip_size_kb": 0, 00:15:00.743 "state": "online", 00:15:00.743 "raid_level": "raid1", 00:15:00.743 "superblock": true, 00:15:00.743 "num_base_bdevs": 2, 00:15:00.743 "num_base_bdevs_discovered": 2, 00:15:00.743 "num_base_bdevs_operational": 2, 00:15:00.743 "process": { 00:15:00.743 "type": "rebuild", 00:15:00.743 "target": "spare", 00:15:00.743 "progress": { 00:15:00.743 "blocks": 20480, 00:15:00.743 "percent": 32 00:15:00.743 } 00:15:00.743 }, 00:15:00.743 "base_bdevs_list": [ 00:15:00.743 { 00:15:00.743 "name": "spare", 00:15:00.743 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:15:00.743 "is_configured": true, 00:15:00.743 "data_offset": 2048, 00:15:00.743 "data_size": 63488 00:15:00.743 }, 00:15:00.743 { 00:15:00.743 "name": "BaseBdev2", 00:15:00.743 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:00.743 "is_configured": true, 00:15:00.743 "data_offset": 2048, 00:15:00.743 "data_size": 63488 00:15:00.743 } 00:15:00.743 ] 00:15:00.743 }' 00:15:00.743 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.003 [2024-10-11 09:48:45.442640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.003 [2024-10-11 09:48:45.497521] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.003 [2024-10-11 09:48:45.497687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.003 [2024-10-11 09:48:45.497710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.003 [2024-10-11 09:48:45.497719] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.003 "name": "raid_bdev1", 00:15:01.003 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:01.003 "strip_size_kb": 0, 00:15:01.003 "state": "online", 00:15:01.003 "raid_level": "raid1", 00:15:01.003 "superblock": true, 00:15:01.003 "num_base_bdevs": 2, 00:15:01.003 "num_base_bdevs_discovered": 1, 00:15:01.003 "num_base_bdevs_operational": 1, 00:15:01.003 "base_bdevs_list": [ 00:15:01.003 { 00:15:01.003 "name": null, 00:15:01.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.003 "is_configured": false, 00:15:01.003 "data_offset": 0, 00:15:01.003 "data_size": 63488 00:15:01.003 }, 00:15:01.003 { 00:15:01.003 "name": "BaseBdev2", 00:15:01.003 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:01.003 "is_configured": true, 00:15:01.003 "data_offset": 2048, 00:15:01.003 "data_size": 63488 00:15:01.003 } 00:15:01.003 ] 00:15:01.003 }' 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.003 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.631 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:01.631 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:01.631 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.631 [2024-10-11 09:48:45.977026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:01.631 [2024-10-11 09:48:45.977193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.631 [2024-10-11 09:48:45.977249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:01.631 [2024-10-11 09:48:45.977283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.631 [2024-10-11 09:48:45.977889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.631 [2024-10-11 09:48:45.977952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:01.631 [2024-10-11 09:48:45.978099] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:01.631 [2024-10-11 09:48:45.978142] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:01.631 [2024-10-11 09:48:45.978194] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:01.631 [2024-10-11 09:48:45.978255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.631 [2024-10-11 09:48:45.997519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:01.631 spare 00:15:01.631 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.631 09:48:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:01.631 [2024-10-11 09:48:45.999806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.570 "name": "raid_bdev1", 00:15:02.570 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:02.570 "strip_size_kb": 0, 00:15:02.570 
"state": "online", 00:15:02.570 "raid_level": "raid1", 00:15:02.570 "superblock": true, 00:15:02.570 "num_base_bdevs": 2, 00:15:02.570 "num_base_bdevs_discovered": 2, 00:15:02.570 "num_base_bdevs_operational": 2, 00:15:02.570 "process": { 00:15:02.570 "type": "rebuild", 00:15:02.570 "target": "spare", 00:15:02.570 "progress": { 00:15:02.570 "blocks": 20480, 00:15:02.570 "percent": 32 00:15:02.570 } 00:15:02.570 }, 00:15:02.570 "base_bdevs_list": [ 00:15:02.570 { 00:15:02.570 "name": "spare", 00:15:02.570 "uuid": "6af9133f-7a04-59ec-b712-110fff7d031e", 00:15:02.570 "is_configured": true, 00:15:02.570 "data_offset": 2048, 00:15:02.570 "data_size": 63488 00:15:02.570 }, 00:15:02.570 { 00:15:02.570 "name": "BaseBdev2", 00:15:02.570 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:02.570 "is_configured": true, 00:15:02.570 "data_offset": 2048, 00:15:02.570 "data_size": 63488 00:15:02.570 } 00:15:02.570 ] 00:15:02.570 }' 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.570 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.570 [2024-10-11 09:48:47.154960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.830 [2024-10-11 09:48:47.205877] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:02.830 [2024-10-11 09:48:47.206100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.830 [2024-10-11 09:48:47.206148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.830 [2024-10-11 09:48:47.206163] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.830 "name": "raid_bdev1", 00:15:02.830 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:02.830 "strip_size_kb": 0, 00:15:02.830 "state": "online", 00:15:02.830 "raid_level": "raid1", 00:15:02.830 "superblock": true, 00:15:02.830 "num_base_bdevs": 2, 00:15:02.830 "num_base_bdevs_discovered": 1, 00:15:02.830 "num_base_bdevs_operational": 1, 00:15:02.830 "base_bdevs_list": [ 00:15:02.830 { 00:15:02.830 "name": null, 00:15:02.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.830 "is_configured": false, 00:15:02.830 "data_offset": 0, 00:15:02.830 "data_size": 63488 00:15:02.830 }, 00:15:02.830 { 00:15:02.830 "name": "BaseBdev2", 00:15:02.830 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:02.830 "is_configured": true, 00:15:02.830 "data_offset": 2048, 00:15:02.830 "data_size": 63488 00:15:02.830 } 00:15:02.830 ] 00:15:02.830 }' 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.830 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.400 "name": "raid_bdev1", 00:15:03.400 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:03.400 "strip_size_kb": 0, 00:15:03.400 "state": "online", 00:15:03.400 "raid_level": "raid1", 00:15:03.400 "superblock": true, 00:15:03.400 "num_base_bdevs": 2, 00:15:03.400 "num_base_bdevs_discovered": 1, 00:15:03.400 "num_base_bdevs_operational": 1, 00:15:03.400 "base_bdevs_list": [ 00:15:03.400 { 00:15:03.400 "name": null, 00:15:03.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.400 "is_configured": false, 00:15:03.400 "data_offset": 0, 00:15:03.400 "data_size": 63488 00:15:03.400 }, 00:15:03.400 { 00:15:03.400 "name": "BaseBdev2", 00:15:03.400 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:03.400 "is_configured": true, 00:15:03.400 "data_offset": 2048, 00:15:03.400 "data_size": 63488 00:15:03.400 } 00:15:03.400 ] 00:15:03.400 }' 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 [2024-10-11 09:48:47.894037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.400 [2024-10-11 09:48:47.894201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.400 [2024-10-11 09:48:47.894254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:03.400 [2024-10-11 09:48:47.894294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.400 [2024-10-11 09:48:47.894835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.400 [2024-10-11 09:48:47.894902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.400 [2024-10-11 09:48:47.895024] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:03.400 [2024-10-11 09:48:47.895075] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:03.400 [2024-10-11 09:48:47.895088] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:03.400 [2024-10-11 09:48:47.895105] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:03.400 BaseBdev1 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.400 09:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.338 "name": "raid_bdev1", 00:15:04.338 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:04.338 "strip_size_kb": 0, 00:15:04.338 "state": "online", 00:15:04.338 "raid_level": "raid1", 00:15:04.338 "superblock": true, 00:15:04.338 "num_base_bdevs": 2, 00:15:04.338 "num_base_bdevs_discovered": 1, 00:15:04.338 "num_base_bdevs_operational": 1, 00:15:04.338 "base_bdevs_list": [ 00:15:04.338 { 00:15:04.338 "name": null, 00:15:04.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.338 "is_configured": false, 00:15:04.338 "data_offset": 0, 00:15:04.338 "data_size": 63488 00:15:04.338 }, 00:15:04.338 { 00:15:04.338 "name": "BaseBdev2", 00:15:04.338 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:04.338 "is_configured": true, 00:15:04.338 "data_offset": 2048, 00:15:04.338 "data_size": 63488 00:15:04.338 } 00:15:04.338 ] 00:15:04.338 }' 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.338 09:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.906 "name": "raid_bdev1", 00:15:04.906 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:04.906 "strip_size_kb": 0, 00:15:04.906 "state": "online", 00:15:04.906 "raid_level": "raid1", 00:15:04.906 "superblock": true, 00:15:04.906 "num_base_bdevs": 2, 00:15:04.906 "num_base_bdevs_discovered": 1, 00:15:04.906 "num_base_bdevs_operational": 1, 00:15:04.906 "base_bdevs_list": [ 00:15:04.906 { 00:15:04.906 "name": null, 00:15:04.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.906 "is_configured": false, 00:15:04.906 "data_offset": 0, 00:15:04.906 "data_size": 63488 00:15:04.906 }, 00:15:04.906 { 00:15:04.906 "name": "BaseBdev2", 00:15:04.906 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:04.906 "is_configured": true, 00:15:04.906 "data_offset": 2048, 00:15:04.906 "data_size": 63488 00:15:04.906 } 00:15:04.906 ] 00:15:04.906 }' 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.906 [2024-10-11 09:48:49.499661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.906 [2024-10-11 09:48:49.499934] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:04.906 [2024-10-11 09:48:49.499998] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:04.906 request: 00:15:04.906 { 00:15:04.906 "base_bdev": "BaseBdev1", 00:15:04.906 "raid_bdev": "raid_bdev1", 00:15:04.906 "method": "bdev_raid_add_base_bdev", 00:15:04.906 "req_id": 1 00:15:04.906 } 00:15:04.906 Got JSON-RPC error response 00:15:04.906 response: 00:15:04.906 { 00:15:04.906 "code": -22, 00:15:04.906 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:04.906 } 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:04.906 09:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.286 09:48:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.286 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.286 "name": "raid_bdev1", 00:15:06.286 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:06.286 "strip_size_kb": 0, 00:15:06.286 "state": "online", 00:15:06.286 "raid_level": "raid1", 00:15:06.286 "superblock": true, 00:15:06.286 "num_base_bdevs": 2, 00:15:06.286 "num_base_bdevs_discovered": 1, 00:15:06.286 "num_base_bdevs_operational": 1, 00:15:06.286 "base_bdevs_list": [ 00:15:06.286 { 00:15:06.286 "name": null, 00:15:06.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.286 "is_configured": false, 00:15:06.286 "data_offset": 0, 00:15:06.286 "data_size": 63488 00:15:06.286 }, 00:15:06.286 { 00:15:06.286 "name": "BaseBdev2", 00:15:06.286 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:06.286 "is_configured": true, 00:15:06.287 "data_offset": 2048, 00:15:06.287 "data_size": 63488 00:15:06.287 } 00:15:06.287 ] 00:15:06.287 }' 00:15:06.287 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.287 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.546 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.546 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.546 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.547 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.547 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.547 09:48:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.547 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.547 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.547 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.547 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.547 09:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.547 "name": "raid_bdev1", 00:15:06.547 "uuid": "b9e26b19-1e11-4c19-8a94-7809aa9ead6d", 00:15:06.547 "strip_size_kb": 0, 00:15:06.547 "state": "online", 00:15:06.547 "raid_level": "raid1", 00:15:06.547 "superblock": true, 00:15:06.547 "num_base_bdevs": 2, 00:15:06.547 "num_base_bdevs_discovered": 1, 00:15:06.547 "num_base_bdevs_operational": 1, 00:15:06.547 "base_bdevs_list": [ 00:15:06.547 { 00:15:06.547 "name": null, 00:15:06.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.547 "is_configured": false, 00:15:06.547 "data_offset": 0, 00:15:06.547 "data_size": 63488 00:15:06.547 }, 00:15:06.547 { 00:15:06.547 "name": "BaseBdev2", 00:15:06.547 "uuid": "d54addf3-8338-5030-9b6b-999bf3880c98", 00:15:06.547 "is_configured": true, 00:15:06.547 "data_offset": 2048, 00:15:06.547 "data_size": 63488 00:15:06.547 } 00:15:06.547 ] 00:15:06.547 }' 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.547 09:48:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77413 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 77413 ']' 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 77413 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77413 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.547 killing process with pid 77413 00:15:06.547 Received shutdown signal, test time was about 17.469788 seconds 00:15:06.547 00:15:06.547 Latency(us) 00:15:06.547 [2024-10-11T09:48:51.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.547 [2024-10-11T09:48:51.179Z] =================================================================================================================== 00:15:06.547 [2024-10-11T09:48:51.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77413' 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 77413 00:15:06.547 [2024-10-11 09:48:51.142671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.547 09:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 77413 00:15:06.547 [2024-10-11 09:48:51.142829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.547 [2024-10-11 09:48:51.142892] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.547 [2024-10-11 09:48:51.142901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:06.806 [2024-10-11 09:48:51.368710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:08.186 00:15:08.186 real 0m20.653s 00:15:08.186 user 0m27.122s 00:15:08.186 sys 0m2.316s 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.186 ************************************ 00:15:08.186 END TEST raid_rebuild_test_sb_io 00:15:08.186 ************************************ 00:15:08.186 09:48:52 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:08.186 09:48:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:08.186 09:48:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:08.186 09:48:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.186 09:48:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.186 ************************************ 00:15:08.186 START TEST raid_rebuild_test 00:15:08.186 ************************************ 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:08.186 09:48:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78108 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78108 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 78108 ']' 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.186 09:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.186 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:08.186 Zero copy mechanism will not be used. 
00:15:08.186 [2024-10-11 09:48:52.691145] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:15:08.186 [2024-10-11 09:48:52.691282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78108 ] 00:15:08.446 [2024-10-11 09:48:52.851347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.446 [2024-10-11 09:48:52.977671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.705 [2024-10-11 09:48:53.192231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.706 [2024-10-11 09:48:53.192301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.965 BaseBdev1_malloc 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.965 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.965 
[2024-10-11 09:48:53.589849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.965 [2024-10-11 09:48:53.589914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.965 [2024-10-11 09:48:53.589937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.965 [2024-10-11 09:48:53.589948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.965 [2024-10-11 09:48:53.592122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.965 [2024-10-11 09:48:53.592166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.267 BaseBdev1 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.267 BaseBdev2_malloc 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.267 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.267 [2024-10-11 09:48:53.648646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:09.267 [2024-10-11 09:48:53.648712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:09.267 [2024-10-11 09:48:53.648731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:09.267 [2024-10-11 09:48:53.648758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.267 [2024-10-11 09:48:53.650869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.268 [2024-10-11 09:48:53.650906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:09.268 BaseBdev2 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 BaseBdev3_malloc 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 [2024-10-11 09:48:53.721105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:09.268 [2024-10-11 09:48:53.721209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.268 [2024-10-11 09:48:53.721233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:09.268 [2024-10-11 09:48:53.721245] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.268 [2024-10-11 09:48:53.723339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.268 [2024-10-11 09:48:53.723382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:09.268 BaseBdev3 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 BaseBdev4_malloc 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 [2024-10-11 09:48:53.781518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:09.268 [2024-10-11 09:48:53.781595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.268 [2024-10-11 09:48:53.781624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:09.268 [2024-10-11 09:48:53.781636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.268 [2024-10-11 09:48:53.784153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.268 [2024-10-11 09:48:53.784200] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:09.268 BaseBdev4 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 spare_malloc 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 spare_delay 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 [2024-10-11 09:48:53.857880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:09.268 [2024-10-11 09:48:53.857939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.268 [2024-10-11 09:48:53.857962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:09.268 [2024-10-11 09:48:53.857973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.268 [2024-10-11 
09:48:53.860091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.268 [2024-10-11 09:48:53.860131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:09.268 spare 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.268 [2024-10-11 09:48:53.869910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.268 [2024-10-11 09:48:53.871745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.268 [2024-10-11 09:48:53.871837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.268 [2024-10-11 09:48:53.871899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.268 [2024-10-11 09:48:53.871998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:09.268 [2024-10-11 09:48:53.872017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:09.268 [2024-10-11 09:48:53.872301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:09.268 [2024-10-11 09:48:53.872498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:09.268 [2024-10-11 09:48:53.872529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:09.268 [2024-10-11 09:48:53.872675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.268 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.526 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.526 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.526 "name": "raid_bdev1", 00:15:09.526 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:09.526 "strip_size_kb": 0, 00:15:09.526 "state": "online", 00:15:09.526 "raid_level": 
"raid1", 00:15:09.526 "superblock": false, 00:15:09.526 "num_base_bdevs": 4, 00:15:09.526 "num_base_bdevs_discovered": 4, 00:15:09.526 "num_base_bdevs_operational": 4, 00:15:09.526 "base_bdevs_list": [ 00:15:09.526 { 00:15:09.526 "name": "BaseBdev1", 00:15:09.526 "uuid": "16d6aaa9-f05a-5fe8-bb95-060bbfd4207e", 00:15:09.526 "is_configured": true, 00:15:09.526 "data_offset": 0, 00:15:09.526 "data_size": 65536 00:15:09.526 }, 00:15:09.526 { 00:15:09.526 "name": "BaseBdev2", 00:15:09.526 "uuid": "82701e8b-f71f-5cc4-b666-28f1ea81f9ca", 00:15:09.526 "is_configured": true, 00:15:09.526 "data_offset": 0, 00:15:09.526 "data_size": 65536 00:15:09.526 }, 00:15:09.526 { 00:15:09.526 "name": "BaseBdev3", 00:15:09.526 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:09.526 "is_configured": true, 00:15:09.526 "data_offset": 0, 00:15:09.526 "data_size": 65536 00:15:09.526 }, 00:15:09.526 { 00:15:09.526 "name": "BaseBdev4", 00:15:09.526 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:09.526 "is_configured": true, 00:15:09.526 "data_offset": 0, 00:15:09.526 "data_size": 65536 00:15:09.526 } 00:15:09.526 ] 00:15:09.526 }' 00:15:09.526 09:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.526 09:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.784 [2024-10-11 09:48:54.361394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.784 09:48:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.784 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:10.044 [2024-10-11 09:48:54.624732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:10.044 /dev/nbd0 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:10.044 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.303 1+0 records in 00:15:10.303 1+0 records out 00:15:10.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428339 s, 9.6 MB/s 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:10.303 09:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:16.868 65536+0 records in 00:15:16.868 65536+0 records out 00:15:16.868 33554432 bytes (34 MB, 32 MiB) copied, 5.72266 s, 5.9 MB/s 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:16.868 [2024-10-11 09:49:00.667000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:16.868 
09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.868 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.869 [2024-10-11 09:49:00.691082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.869 09:49:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.869 "name": "raid_bdev1", 00:15:16.869 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:16.869 "strip_size_kb": 0, 00:15:16.869 "state": "online", 00:15:16.869 "raid_level": "raid1", 00:15:16.869 "superblock": false, 00:15:16.869 "num_base_bdevs": 4, 00:15:16.869 "num_base_bdevs_discovered": 3, 00:15:16.869 "num_base_bdevs_operational": 3, 00:15:16.869 "base_bdevs_list": [ 00:15:16.869 { 00:15:16.869 "name": null, 00:15:16.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.869 "is_configured": false, 00:15:16.869 "data_offset": 0, 00:15:16.869 "data_size": 65536 00:15:16.869 }, 00:15:16.869 { 00:15:16.869 "name": "BaseBdev2", 00:15:16.869 "uuid": "82701e8b-f71f-5cc4-b666-28f1ea81f9ca", 00:15:16.869 "is_configured": true, 00:15:16.869 "data_offset": 0, 00:15:16.869 "data_size": 65536 00:15:16.869 }, 00:15:16.869 { 00:15:16.869 "name": "BaseBdev3", 00:15:16.869 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:16.869 "is_configured": true, 00:15:16.869 "data_offset": 0, 00:15:16.869 "data_size": 65536 00:15:16.869 }, 00:15:16.869 { 00:15:16.869 "name": "BaseBdev4", 00:15:16.869 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:16.869 
"is_configured": true, 00:15:16.869 "data_offset": 0, 00:15:16.869 "data_size": 65536 00:15:16.869 } 00:15:16.869 ] 00:15:16.869 }' 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.869 09:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.869 09:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.869 09:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.869 09:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.869 [2024-10-11 09:49:01.126376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.869 [2024-10-11 09:49:01.143529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:16.869 09:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.869 09:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:16.869 [2024-10-11 09:49:01.145403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.806 
09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.806 "name": "raid_bdev1", 00:15:17.806 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:17.806 "strip_size_kb": 0, 00:15:17.806 "state": "online", 00:15:17.806 "raid_level": "raid1", 00:15:17.806 "superblock": false, 00:15:17.806 "num_base_bdevs": 4, 00:15:17.806 "num_base_bdevs_discovered": 4, 00:15:17.806 "num_base_bdevs_operational": 4, 00:15:17.806 "process": { 00:15:17.806 "type": "rebuild", 00:15:17.806 "target": "spare", 00:15:17.806 "progress": { 00:15:17.806 "blocks": 20480, 00:15:17.806 "percent": 31 00:15:17.806 } 00:15:17.806 }, 00:15:17.806 "base_bdevs_list": [ 00:15:17.806 { 00:15:17.806 "name": "spare", 00:15:17.806 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:17.806 "is_configured": true, 00:15:17.806 "data_offset": 0, 00:15:17.806 "data_size": 65536 00:15:17.806 }, 00:15:17.806 { 00:15:17.806 "name": "BaseBdev2", 00:15:17.806 "uuid": "82701e8b-f71f-5cc4-b666-28f1ea81f9ca", 00:15:17.806 "is_configured": true, 00:15:17.806 "data_offset": 0, 00:15:17.806 "data_size": 65536 00:15:17.806 }, 00:15:17.806 { 00:15:17.806 "name": "BaseBdev3", 00:15:17.806 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:17.806 "is_configured": true, 00:15:17.806 "data_offset": 0, 00:15:17.806 "data_size": 65536 00:15:17.806 }, 00:15:17.806 { 00:15:17.806 "name": "BaseBdev4", 00:15:17.806 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:17.806 "is_configured": true, 00:15:17.806 "data_offset": 0, 00:15:17.806 "data_size": 65536 00:15:17.806 } 00:15:17.806 ] 00:15:17.806 }' 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.806 [2024-10-11 09:49:02.296929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.806 [2024-10-11 09:49:02.351225] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.806 [2024-10-11 09:49:02.351315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.806 [2024-10-11 09:49:02.351332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.806 [2024-10-11 09:49:02.351342] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.806 09:49:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.806 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.806 "name": "raid_bdev1", 00:15:17.806 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:17.806 "strip_size_kb": 0, 00:15:17.806 "state": "online", 00:15:17.806 "raid_level": "raid1", 00:15:17.806 "superblock": false, 00:15:17.806 "num_base_bdevs": 4, 00:15:17.806 "num_base_bdevs_discovered": 3, 00:15:17.806 "num_base_bdevs_operational": 3, 00:15:17.806 "base_bdevs_list": [ 00:15:17.806 { 00:15:17.806 "name": null, 00:15:17.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.806 "is_configured": false, 00:15:17.806 "data_offset": 0, 00:15:17.806 "data_size": 65536 00:15:17.806 }, 00:15:17.806 { 00:15:17.806 "name": "BaseBdev2", 00:15:17.806 "uuid": "82701e8b-f71f-5cc4-b666-28f1ea81f9ca", 00:15:17.806 "is_configured": true, 00:15:17.806 "data_offset": 0, 00:15:17.806 "data_size": 65536 00:15:17.806 }, 00:15:17.806 { 00:15:17.806 "name": 
"BaseBdev3", 00:15:17.806 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:17.806 "is_configured": true, 00:15:17.806 "data_offset": 0, 00:15:17.806 "data_size": 65536 00:15:17.806 }, 00:15:17.806 { 00:15:17.806 "name": "BaseBdev4", 00:15:17.807 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:17.807 "is_configured": true, 00:15:17.807 "data_offset": 0, 00:15:17.807 "data_size": 65536 00:15:17.807 } 00:15:17.807 ] 00:15:17.807 }' 00:15:17.807 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.807 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.375 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.375 "name": "raid_bdev1", 00:15:18.375 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:18.375 "strip_size_kb": 0, 00:15:18.375 "state": "online", 00:15:18.375 "raid_level": 
"raid1", 00:15:18.375 "superblock": false, 00:15:18.375 "num_base_bdevs": 4, 00:15:18.375 "num_base_bdevs_discovered": 3, 00:15:18.375 "num_base_bdevs_operational": 3, 00:15:18.375 "base_bdevs_list": [ 00:15:18.375 { 00:15:18.375 "name": null, 00:15:18.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.375 "is_configured": false, 00:15:18.375 "data_offset": 0, 00:15:18.375 "data_size": 65536 00:15:18.375 }, 00:15:18.375 { 00:15:18.375 "name": "BaseBdev2", 00:15:18.375 "uuid": "82701e8b-f71f-5cc4-b666-28f1ea81f9ca", 00:15:18.375 "is_configured": true, 00:15:18.375 "data_offset": 0, 00:15:18.375 "data_size": 65536 00:15:18.375 }, 00:15:18.375 { 00:15:18.375 "name": "BaseBdev3", 00:15:18.375 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:18.375 "is_configured": true, 00:15:18.375 "data_offset": 0, 00:15:18.375 "data_size": 65536 00:15:18.376 }, 00:15:18.376 { 00:15:18.376 "name": "BaseBdev4", 00:15:18.376 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:18.376 "is_configured": true, 00:15:18.376 "data_offset": 0, 00:15:18.376 "data_size": 65536 00:15:18.376 } 00:15:18.376 ] 00:15:18.376 }' 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.376 [2024-10-11 09:49:02.970020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:15:18.376 [2024-10-11 09:49:02.987983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.376 09:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:18.376 [2024-10-11 09:49:02.990170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.757 09:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.757 "name": "raid_bdev1", 00:15:19.757 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:19.757 "strip_size_kb": 0, 00:15:19.757 "state": "online", 00:15:19.757 "raid_level": "raid1", 00:15:19.757 "superblock": false, 00:15:19.757 "num_base_bdevs": 4, 00:15:19.757 "num_base_bdevs_discovered": 4, 00:15:19.757 "num_base_bdevs_operational": 4, 
00:15:19.757 "process": { 00:15:19.757 "type": "rebuild", 00:15:19.757 "target": "spare", 00:15:19.757 "progress": { 00:15:19.757 "blocks": 20480, 00:15:19.757 "percent": 31 00:15:19.757 } 00:15:19.757 }, 00:15:19.757 "base_bdevs_list": [ 00:15:19.757 { 00:15:19.757 "name": "spare", 00:15:19.757 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:19.757 "is_configured": true, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 }, 00:15:19.757 { 00:15:19.757 "name": "BaseBdev2", 00:15:19.757 "uuid": "82701e8b-f71f-5cc4-b666-28f1ea81f9ca", 00:15:19.757 "is_configured": true, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 }, 00:15:19.757 { 00:15:19.757 "name": "BaseBdev3", 00:15:19.757 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:19.757 "is_configured": true, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 }, 00:15:19.757 { 00:15:19.757 "name": "BaseBdev4", 00:15:19.757 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:19.757 "is_configured": true, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 } 00:15:19.757 ] 00:15:19.757 }' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 [2024-10-11 09:49:04.156949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.757 [2024-10-11 09:49:04.195891] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.757 "name": "raid_bdev1", 00:15:19.757 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:19.757 "strip_size_kb": 0, 00:15:19.757 "state": "online", 00:15:19.757 "raid_level": "raid1", 00:15:19.757 "superblock": false, 00:15:19.757 "num_base_bdevs": 4, 00:15:19.757 "num_base_bdevs_discovered": 3, 00:15:19.757 "num_base_bdevs_operational": 3, 00:15:19.757 "process": { 00:15:19.757 "type": "rebuild", 00:15:19.757 "target": "spare", 00:15:19.757 "progress": { 00:15:19.757 "blocks": 24576, 00:15:19.757 "percent": 37 00:15:19.757 } 00:15:19.757 }, 00:15:19.757 "base_bdevs_list": [ 00:15:19.757 { 00:15:19.757 "name": "spare", 00:15:19.757 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:19.757 "is_configured": true, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 }, 00:15:19.757 { 00:15:19.757 "name": null, 00:15:19.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.757 "is_configured": false, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 }, 00:15:19.757 { 00:15:19.757 "name": "BaseBdev3", 00:15:19.757 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:19.757 "is_configured": true, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 }, 00:15:19.757 { 00:15:19.757 "name": "BaseBdev4", 00:15:19.757 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:19.757 "is_configured": true, 00:15:19.757 "data_offset": 0, 00:15:19.757 "data_size": 65536 00:15:19.757 } 00:15:19.757 ] 00:15:19.757 }' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.757 09:49:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 09:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.018 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.018 "name": "raid_bdev1", 00:15:20.018 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:20.018 "strip_size_kb": 0, 00:15:20.018 "state": "online", 00:15:20.018 "raid_level": "raid1", 00:15:20.018 "superblock": false, 00:15:20.018 "num_base_bdevs": 4, 00:15:20.018 "num_base_bdevs_discovered": 3, 00:15:20.018 "num_base_bdevs_operational": 3, 00:15:20.018 "process": { 00:15:20.018 "type": "rebuild", 00:15:20.018 "target": "spare", 00:15:20.018 "progress": { 00:15:20.018 "blocks": 26624, 00:15:20.018 "percent": 40 
00:15:20.018 } 00:15:20.018 }, 00:15:20.018 "base_bdevs_list": [ 00:15:20.018 { 00:15:20.018 "name": "spare", 00:15:20.018 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:20.018 "is_configured": true, 00:15:20.018 "data_offset": 0, 00:15:20.018 "data_size": 65536 00:15:20.018 }, 00:15:20.018 { 00:15:20.018 "name": null, 00:15:20.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.018 "is_configured": false, 00:15:20.018 "data_offset": 0, 00:15:20.018 "data_size": 65536 00:15:20.018 }, 00:15:20.018 { 00:15:20.018 "name": "BaseBdev3", 00:15:20.018 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:20.018 "is_configured": true, 00:15:20.018 "data_offset": 0, 00:15:20.018 "data_size": 65536 00:15:20.018 }, 00:15:20.018 { 00:15:20.018 "name": "BaseBdev4", 00:15:20.018 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:20.018 "is_configured": true, 00:15:20.018 "data_offset": 0, 00:15:20.018 "data_size": 65536 00:15:20.018 } 00:15:20.018 ] 00:15:20.018 }' 00:15:20.018 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.018 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.018 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.018 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.018 09:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.955 09:49:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.955 "name": "raid_bdev1", 00:15:20.955 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:20.955 "strip_size_kb": 0, 00:15:20.955 "state": "online", 00:15:20.955 "raid_level": "raid1", 00:15:20.955 "superblock": false, 00:15:20.955 "num_base_bdevs": 4, 00:15:20.955 "num_base_bdevs_discovered": 3, 00:15:20.955 "num_base_bdevs_operational": 3, 00:15:20.955 "process": { 00:15:20.955 "type": "rebuild", 00:15:20.955 "target": "spare", 00:15:20.955 "progress": { 00:15:20.955 "blocks": 51200, 00:15:20.955 "percent": 78 00:15:20.955 } 00:15:20.955 }, 00:15:20.955 "base_bdevs_list": [ 00:15:20.955 { 00:15:20.955 "name": "spare", 00:15:20.955 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:20.955 "is_configured": true, 00:15:20.955 "data_offset": 0, 00:15:20.955 "data_size": 65536 00:15:20.955 }, 00:15:20.955 { 00:15:20.955 "name": null, 00:15:20.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.955 "is_configured": false, 00:15:20.955 "data_offset": 0, 00:15:20.955 "data_size": 65536 00:15:20.955 }, 00:15:20.955 { 00:15:20.955 "name": "BaseBdev3", 00:15:20.955 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:20.955 "is_configured": true, 
00:15:20.955 "data_offset": 0, 00:15:20.955 "data_size": 65536 00:15:20.955 }, 00:15:20.955 { 00:15:20.955 "name": "BaseBdev4", 00:15:20.955 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:20.955 "is_configured": true, 00:15:20.955 "data_offset": 0, 00:15:20.955 "data_size": 65536 00:15:20.955 } 00:15:20.955 ] 00:15:20.955 }' 00:15:20.955 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.214 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.214 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.214 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.214 09:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.785 [2024-10-11 09:49:06.205774] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:21.785 [2024-10-11 09:49:06.205869] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:21.785 [2024-10-11 09:49:06.205914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.043 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.302 "name": "raid_bdev1", 00:15:22.302 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:22.302 "strip_size_kb": 0, 00:15:22.302 "state": "online", 00:15:22.302 "raid_level": "raid1", 00:15:22.302 "superblock": false, 00:15:22.302 "num_base_bdevs": 4, 00:15:22.302 "num_base_bdevs_discovered": 3, 00:15:22.302 "num_base_bdevs_operational": 3, 00:15:22.302 "base_bdevs_list": [ 00:15:22.302 { 00:15:22.302 "name": "spare", 00:15:22.302 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:22.302 "is_configured": true, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 }, 00:15:22.302 { 00:15:22.302 "name": null, 00:15:22.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.302 "is_configured": false, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 }, 00:15:22.302 { 00:15:22.302 "name": "BaseBdev3", 00:15:22.302 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:22.302 "is_configured": true, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 }, 00:15:22.302 { 00:15:22.302 "name": "BaseBdev4", 00:15:22.302 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:22.302 "is_configured": true, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 } 00:15:22.302 ] 00:15:22.302 }' 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.302 09:49:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.302 "name": "raid_bdev1", 00:15:22.302 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:22.302 "strip_size_kb": 0, 00:15:22.302 "state": "online", 00:15:22.302 "raid_level": "raid1", 00:15:22.302 "superblock": false, 00:15:22.302 "num_base_bdevs": 4, 00:15:22.302 "num_base_bdevs_discovered": 3, 00:15:22.302 "num_base_bdevs_operational": 3, 00:15:22.302 "base_bdevs_list": [ 00:15:22.302 { 00:15:22.302 "name": "spare", 
00:15:22.302 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:22.302 "is_configured": true, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 }, 00:15:22.302 { 00:15:22.302 "name": null, 00:15:22.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.302 "is_configured": false, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 }, 00:15:22.302 { 00:15:22.302 "name": "BaseBdev3", 00:15:22.302 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:22.302 "is_configured": true, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 }, 00:15:22.302 { 00:15:22.302 "name": "BaseBdev4", 00:15:22.302 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:22.302 "is_configured": true, 00:15:22.302 "data_offset": 0, 00:15:22.302 "data_size": 65536 00:15:22.302 } 00:15:22.302 ] 00:15:22.302 }' 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.302 09:49:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.302 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.561 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.561 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.561 "name": "raid_bdev1", 00:15:22.561 "uuid": "03f001d7-3bdc-4085-99f5-117f8c40b0cf", 00:15:22.561 "strip_size_kb": 0, 00:15:22.561 "state": "online", 00:15:22.561 "raid_level": "raid1", 00:15:22.561 "superblock": false, 00:15:22.561 "num_base_bdevs": 4, 00:15:22.561 "num_base_bdevs_discovered": 3, 00:15:22.561 "num_base_bdevs_operational": 3, 00:15:22.561 "base_bdevs_list": [ 00:15:22.561 { 00:15:22.561 "name": "spare", 00:15:22.561 "uuid": "c6c84b33-409b-5260-ad8d-20045738f6e0", 00:15:22.561 "is_configured": true, 00:15:22.561 "data_offset": 0, 00:15:22.561 "data_size": 65536 00:15:22.561 }, 00:15:22.561 { 00:15:22.561 "name": null, 00:15:22.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.561 "is_configured": false, 00:15:22.561 "data_offset": 0, 00:15:22.561 "data_size": 65536 00:15:22.561 }, 00:15:22.561 { 00:15:22.561 "name": "BaseBdev3", 00:15:22.561 "uuid": "b371bde0-941f-56bd-bec3-1a36a76225f9", 00:15:22.561 "is_configured": true, 
00:15:22.561 "data_offset": 0, 00:15:22.561 "data_size": 65536 00:15:22.561 }, 00:15:22.561 { 00:15:22.561 "name": "BaseBdev4", 00:15:22.561 "uuid": "c945718d-182e-5ed7-a6aa-c223dec1b6c2", 00:15:22.561 "is_configured": true, 00:15:22.561 "data_offset": 0, 00:15:22.561 "data_size": 65536 00:15:22.561 } 00:15:22.561 ] 00:15:22.561 }' 00:15:22.561 09:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.561 09:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.820 [2024-10-11 09:49:07.332068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.820 [2024-10-11 09:49:07.332104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.820 [2024-10-11 09:49:07.332192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.820 [2024-10-11 09:49:07.332278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.820 [2024-10-11 09:49:07.332299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:22.820 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.821 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:23.080 /dev/nbd0 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:23.080 09:49:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.080 1+0 records in 00:15:23.080 1+0 records out 00:15:23.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372153 s, 11.0 MB/s 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:23.080 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:23.339 /dev/nbd1 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:23.339 
09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.339 1+0 records in 00:15:23.339 1+0 records out 00:15:23.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432931 s, 9.5 MB/s 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:15:23.339 09:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:23.597 09:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:23.597 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.597 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:23.597 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.597 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:23.597 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.597 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.854 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.855 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:24.112 
09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:24.112 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:24.112 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:24.112 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:24.112 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78108 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 78108 ']' 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 78108 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78108 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.113 killing process with pid 78108 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78108' 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 78108 00:15:24.113 
Received shutdown signal, test time was about 60.000000 seconds 00:15:24.113 00:15:24.113 Latency(us) 00:15:24.113 [2024-10-11T09:49:08.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.113 [2024-10-11T09:49:08.745Z] =================================================================================================================== 00:15:24.113 [2024-10-11T09:49:08.745Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:24.113 [2024-10-11 09:49:08.605425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.113 09:49:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 78108 00:15:24.680 [2024-10-11 09:49:09.116160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.618 09:49:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:25.618 00:15:25.618 real 0m17.664s 00:15:25.618 user 0m19.833s 00:15:25.618 sys 0m3.274s 00:15:25.618 09:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.618 09:49:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.618 ************************************ 00:15:25.618 END TEST raid_rebuild_test 00:15:25.618 ************************************ 00:15:25.878 09:49:10 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:25.878 09:49:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:25.878 09:49:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.878 09:49:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 ************************************ 00:15:25.878 START TEST raid_rebuild_test_sb 00:15:25.878 ************************************ 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:15:25.878 09:49:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78558 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78558 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78558 ']' 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:25.878 09:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:25.878 I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:25.878 Zero copy mechanism will not be used.
00:15:25.878 [2024-10-11 09:49:10.433781] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:15:25.878 [2024-10-11 09:49:10.433916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78558 ]
00:15:26.141 [2024-10-11 09:49:10.603440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:26.404 [2024-10-11 09:49:10.775054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:26.404 [2024-10-11 09:49:11.029864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:26.404 [2024-10-11 09:49:11.029941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.973 BaseBdev1_malloc
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.973 [2024-10-11 09:49:11.413356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:26.973 [2024-10-11 09:49:11.413446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:26.973 [2024-10-11 09:49:11.413477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:26.973 [2024-10-11 09:49:11.413492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:26.973 [2024-10-11 09:49:11.416108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:26.973 [2024-10-11 09:49:11.416154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:26.973 BaseBdev1
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.973 BaseBdev2_malloc
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.973 [2024-10-11 09:49:11.477891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:26.973 [2024-10-11 09:49:11.477980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:26.973 [2024-10-11 09:49:11.478005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:26.973 [2024-10-11 09:49:11.478019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:26.973 [2024-10-11 09:49:11.480493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:26.973 [2024-10-11 09:49:11.480539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:26.973 BaseBdev2
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.973 BaseBdev3_malloc
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:26.973 [2024-10-11 09:49:11.559698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:15:26.973 [2024-10-11 09:49:11.559783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:26.973 [2024-10-11 09:49:11.559811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:26.973 [2024-10-11 09:49:11.559824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:26.973 [2024-10-11 09:49:11.562260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:26.973 [2024-10-11 09:49:11.562301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:15:26.973 BaseBdev3
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.973 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.232 BaseBdev4_malloc
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.232 [2024-10-11 09:49:11.625045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:15:27.232 [2024-10-11 09:49:11.625129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:27.232 [2024-10-11 09:49:11.625155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:15:27.232 [2024-10-11 09:49:11.625168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:27.232 [2024-10-11 09:49:11.627601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:27.232 [2024-10-11 09:49:11.627647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:15:27.232 BaseBdev4
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.232 spare_malloc
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.232 spare_delay
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.232 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.232 [2024-10-11 09:49:11.703092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:27.232 [2024-10-11 09:49:11.703173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:27.232 [2024-10-11 09:49:11.703200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:15:27.232 [2024-10-11 09:49:11.703212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:27.233 [2024-10-11 09:49:11.705765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:27.233 [2024-10-11 09:49:11.705821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:27.233 spare
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.233 [2024-10-11 09:49:11.715151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:27.233 [2024-10-11 09:49:11.717336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:27.233 [2024-10-11 09:49:11.717427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:27.233 [2024-10-11 09:49:11.717493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:27.233 [2024-10-11 09:49:11.717748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:15:27.233 [2024-10-11 09:49:11.717771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:27.233 [2024-10-11 09:49:11.718124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:27.233 [2024-10-11 09:49:11.718360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:15:27.233 [2024-10-11 09:49:11.718380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:15:27.233 [2024-10-11 09:49:11.718577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:27.233 "name": "raid_bdev1",
00:15:27.233 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113",
00:15:27.233 "strip_size_kb": 0,
00:15:27.233 "state": "online",
00:15:27.233 "raid_level": "raid1",
00:15:27.233 "superblock": true,
00:15:27.233 "num_base_bdevs": 4,
00:15:27.233 "num_base_bdevs_discovered": 4,
00:15:27.233 "num_base_bdevs_operational": 4,
00:15:27.233 "base_bdevs_list": [
00:15:27.233 {
00:15:27.233 "name": "BaseBdev1",
00:15:27.233 "uuid": "b38462ac-52b8-59b3-9848-3111768e58cd",
00:15:27.233 "is_configured": true,
00:15:27.233 "data_offset": 2048,
00:15:27.233 "data_size": 63488
00:15:27.233 },
00:15:27.233 {
00:15:27.233 "name": "BaseBdev2",
00:15:27.233 "uuid": "57e88d27-ab75-538a-88d4-49ffa4e15fe1",
00:15:27.233 "is_configured": true,
00:15:27.233 "data_offset": 2048,
00:15:27.233 "data_size": 63488
00:15:27.233 },
00:15:27.233 {
00:15:27.233 "name": "BaseBdev3",
00:15:27.233 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53",
00:15:27.233 "is_configured": true,
00:15:27.233 "data_offset": 2048,
00:15:27.233 "data_size": 63488
00:15:27.233 },
00:15:27.233 {
00:15:27.233 "name": "BaseBdev4",
00:15:27.233 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152",
00:15:27.233 "is_configured": true,
00:15:27.233 "data_offset": 2048,
00:15:27.233 "data_size": 63488
00:15:27.233 }
00:15:27.233 ]
00:15:27.233 }'
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:27.233 09:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.801 [2024-10-11 09:49:12.214701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:27.801 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:15:28.060 [2024-10-11 09:49:12.541933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:15:28.060 /dev/nbd0
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:28.060 1+0 records in
00:15:28.060 1+0 records out
00:15:28.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508084 s, 8.1 MB/s
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:15:28.060 09:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:15:36.182 63488+0 records in
00:15:36.182 63488+0 records out
00:15:36.182 32505856 bytes (33 MB, 31 MiB) copied, 6.77901 s, 4.8 MB/s
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:36.182 [2024-10-11 09:49:19.659100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:36.182 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.183 [2024-10-11 09:49:19.699351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:36.183 "name": "raid_bdev1",
00:15:36.183 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113",
00:15:36.183 "strip_size_kb": 0,
00:15:36.183 "state": "online",
00:15:36.183 "raid_level": "raid1",
00:15:36.183 "superblock": true,
00:15:36.183 "num_base_bdevs": 4,
00:15:36.183 "num_base_bdevs_discovered": 3,
00:15:36.183 "num_base_bdevs_operational": 3,
00:15:36.183 "base_bdevs_list": [
00:15:36.183 {
00:15:36.183 "name": null,
00:15:36.183 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:36.183 "is_configured": false,
00:15:36.183 "data_offset": 0,
00:15:36.183 "data_size": 63488
00:15:36.183 },
00:15:36.183 {
00:15:36.183 "name": "BaseBdev2",
00:15:36.183 "uuid": "57e88d27-ab75-538a-88d4-49ffa4e15fe1",
00:15:36.183 "is_configured": true,
00:15:36.183 "data_offset": 2048,
00:15:36.183 "data_size": 63488
00:15:36.183 },
00:15:36.183 {
00:15:36.183 "name": "BaseBdev3",
00:15:36.183 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53",
00:15:36.183 "is_configured": true,
00:15:36.183 "data_offset": 2048,
00:15:36.183 "data_size": 63488
00:15:36.183 },
00:15:36.183 {
00:15:36.183 "name": "BaseBdev4",
00:15:36.183 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152",
00:15:36.183 "is_configured": true,
00:15:36.183 "data_offset": 2048,
00:15:36.183 "data_size": 63488
00:15:36.183 }
00:15:36.183 ]
00:15:36.183 }'
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:36.183 09:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.183 09:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:36.183 09:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.183 09:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.183 [2024-10-11 09:49:20.222549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:36.183 [2024-10-11 09:49:20.243252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500
00:15:36.183 09:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.183 [2024-10-11 09:49:20.245704] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:36.183 09:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:36.750 "name": "raid_bdev1",
00:15:36.750 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113",
00:15:36.750 "strip_size_kb": 0,
00:15:36.750 "state": "online",
00:15:36.750 "raid_level": "raid1",
00:15:36.750 "superblock": true,
00:15:36.750 "num_base_bdevs": 4,
00:15:36.750 "num_base_bdevs_discovered": 4,
00:15:36.750 "num_base_bdevs_operational": 4,
00:15:36.750 "process": {
00:15:36.750 "type": "rebuild",
00:15:36.750 "target": "spare",
00:15:36.750 "progress": {
00:15:36.750 "blocks": 20480,
00:15:36.750 "percent": 32
00:15:36.750 }
00:15:36.750 },
00:15:36.750 "base_bdevs_list": [
00:15:36.750 {
00:15:36.750 "name": "spare",
00:15:36.750 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a",
00:15:36.750 "is_configured": true,
00:15:36.750 "data_offset": 2048,
00:15:36.750 "data_size": 63488
00:15:36.750 },
00:15:36.750 {
00:15:36.750 "name": "BaseBdev2",
00:15:36.750 "uuid": "57e88d27-ab75-538a-88d4-49ffa4e15fe1",
00:15:36.750 "is_configured": true,
00:15:36.750 "data_offset": 2048,
00:15:36.750 "data_size": 63488
00:15:36.750 },
00:15:36.750 {
00:15:36.750 "name": "BaseBdev3",
00:15:36.750 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53",
00:15:36.750 "is_configured": true,
00:15:36.750 "data_offset": 2048,
00:15:36.750 "data_size": 63488
00:15:36.750 },
00:15:36.750 {
00:15:36.750 "name": "BaseBdev4",
00:15:36.750 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152",
00:15:36.750 "is_configured": true,
00:15:36.750 "data_offset": 2048,
00:15:36.750 "data_size": 63488
00:15:36.750 }
00:15:36.750 ]
00:15:36.750 }'
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:36.750 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:37.009 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:37.009 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:37.010 [2024-10-11 09:49:21.400950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:37.010 [2024-10-11 09:49:21.452056] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:37.010 [2024-10-11 09:49:21.452269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:37.010 [2024-10-11 09:49:21.452294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:37.010 [2024-10-11 09:49:21.452308] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:37.010 "name": "raid_bdev1",
00:15:37.010 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113",
00:15:37.010 "strip_size_kb": 0,
00:15:37.010 "state": "online",
00:15:37.010 "raid_level": "raid1",
00:15:37.010 "superblock": true,
00:15:37.010 "num_base_bdevs": 4,
00:15:37.010 "num_base_bdevs_discovered": 3,
00:15:37.010 "num_base_bdevs_operational": 3,
00:15:37.010 "base_bdevs_list": [
00:15:37.010 {
00:15:37.010 "name": null,
00:15:37.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:37.010 "is_configured": false,
00:15:37.010 "data_offset": 0,
00:15:37.010 "data_size": 63488
00:15:37.010 },
00:15:37.010 {
00:15:37.010 "name": "BaseBdev2",
00:15:37.010 "uuid": "57e88d27-ab75-538a-88d4-49ffa4e15fe1",
00:15:37.010 "is_configured": true,
00:15:37.010 "data_offset": 2048,
00:15:37.010 "data_size": 63488
00:15:37.010 },
00:15:37.010 {
00:15:37.010 "name": "BaseBdev3",
00:15:37.010 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53",
00:15:37.010 "is_configured": true,
00:15:37.010 "data_offset": 2048,
00:15:37.010 "data_size": 63488
00:15:37.010 },
00:15:37.010 {
00:15:37.010 "name": "BaseBdev4",
00:15:37.010 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152",
00:15:37.010 "is_configured": true,
00:15:37.010 "data_offset": 2048,
00:15:37.010 "data_size": 63488
00:15:37.010 }
00:15:37.010 ]
00:15:37.010 }'
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:37.010 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:37.578 
09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.578 09:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.578 "name": "raid_bdev1", 00:15:37.578 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:37.578 "strip_size_kb": 0, 00:15:37.578 "state": "online", 00:15:37.578 "raid_level": "raid1", 00:15:37.578 "superblock": true, 00:15:37.578 "num_base_bdevs": 4, 00:15:37.578 "num_base_bdevs_discovered": 3, 00:15:37.578 "num_base_bdevs_operational": 3, 00:15:37.578 "base_bdevs_list": [ 00:15:37.578 { 00:15:37.578 "name": null, 00:15:37.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.578 "is_configured": false, 00:15:37.578 "data_offset": 0, 00:15:37.578 "data_size": 63488 00:15:37.578 }, 00:15:37.578 { 00:15:37.578 "name": "BaseBdev2", 00:15:37.578 "uuid": "57e88d27-ab75-538a-88d4-49ffa4e15fe1", 00:15:37.578 "is_configured": true, 00:15:37.578 "data_offset": 2048, 00:15:37.578 "data_size": 63488 00:15:37.578 }, 00:15:37.578 { 00:15:37.578 "name": "BaseBdev3", 00:15:37.578 "uuid": 
"a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:37.578 "is_configured": true, 00:15:37.578 "data_offset": 2048, 00:15:37.578 "data_size": 63488 00:15:37.578 }, 00:15:37.578 { 00:15:37.578 "name": "BaseBdev4", 00:15:37.578 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:37.578 "is_configured": true, 00:15:37.578 "data_offset": 2048, 00:15:37.578 "data_size": 63488 00:15:37.578 } 00:15:37.578 ] 00:15:37.578 }' 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.578 [2024-10-11 09:49:22.111317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.578 [2024-10-11 09:49:22.130524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.578 09:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:37.578 [2024-10-11 09:49:22.132976] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.514 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.514 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:38.514 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.514 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.514 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.773 "name": "raid_bdev1", 00:15:38.773 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:38.773 "strip_size_kb": 0, 00:15:38.773 "state": "online", 00:15:38.773 "raid_level": "raid1", 00:15:38.773 "superblock": true, 00:15:38.773 "num_base_bdevs": 4, 00:15:38.773 "num_base_bdevs_discovered": 4, 00:15:38.773 "num_base_bdevs_operational": 4, 00:15:38.773 "process": { 00:15:38.773 "type": "rebuild", 00:15:38.773 "target": "spare", 00:15:38.773 "progress": { 00:15:38.773 "blocks": 20480, 00:15:38.773 "percent": 32 00:15:38.773 } 00:15:38.773 }, 00:15:38.773 "base_bdevs_list": [ 00:15:38.773 { 00:15:38.773 "name": "spare", 00:15:38.773 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:38.773 "is_configured": true, 00:15:38.773 "data_offset": 2048, 00:15:38.773 "data_size": 63488 00:15:38.773 }, 00:15:38.773 { 00:15:38.773 "name": "BaseBdev2", 00:15:38.773 "uuid": "57e88d27-ab75-538a-88d4-49ffa4e15fe1", 00:15:38.773 "is_configured": true, 00:15:38.773 "data_offset": 2048, 
00:15:38.773 "data_size": 63488 00:15:38.773 }, 00:15:38.773 { 00:15:38.773 "name": "BaseBdev3", 00:15:38.773 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:38.773 "is_configured": true, 00:15:38.773 "data_offset": 2048, 00:15:38.773 "data_size": 63488 00:15:38.773 }, 00:15:38.773 { 00:15:38.773 "name": "BaseBdev4", 00:15:38.773 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:38.773 "is_configured": true, 00:15:38.773 "data_offset": 2048, 00:15:38.773 "data_size": 63488 00:15:38.773 } 00:15:38.773 ] 00:15:38.773 }' 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.773 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:38.774 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:38.774 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:38.774 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:38.774 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:38.774 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:38.774 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:38.774 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.774 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.774 [2024-10-11 09:49:23.288438] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.033 [2024-10-11 09:49:23.439601] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.033 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.033 "name": "raid_bdev1", 00:15:39.033 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:39.033 "strip_size_kb": 0, 00:15:39.033 "state": "online", 00:15:39.033 "raid_level": "raid1", 00:15:39.033 "superblock": true, 00:15:39.033 "num_base_bdevs": 4, 
00:15:39.033 "num_base_bdevs_discovered": 3, 00:15:39.033 "num_base_bdevs_operational": 3, 00:15:39.033 "process": { 00:15:39.033 "type": "rebuild", 00:15:39.033 "target": "spare", 00:15:39.033 "progress": { 00:15:39.033 "blocks": 24576, 00:15:39.033 "percent": 38 00:15:39.033 } 00:15:39.033 }, 00:15:39.033 "base_bdevs_list": [ 00:15:39.033 { 00:15:39.033 "name": "spare", 00:15:39.033 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:39.033 "is_configured": true, 00:15:39.033 "data_offset": 2048, 00:15:39.033 "data_size": 63488 00:15:39.033 }, 00:15:39.033 { 00:15:39.033 "name": null, 00:15:39.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.033 "is_configured": false, 00:15:39.033 "data_offset": 0, 00:15:39.033 "data_size": 63488 00:15:39.033 }, 00:15:39.033 { 00:15:39.033 "name": "BaseBdev3", 00:15:39.033 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:39.033 "is_configured": true, 00:15:39.033 "data_offset": 2048, 00:15:39.033 "data_size": 63488 00:15:39.033 }, 00:15:39.033 { 00:15:39.033 "name": "BaseBdev4", 00:15:39.034 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:39.034 "is_configured": true, 00:15:39.034 "data_offset": 2048, 00:15:39.034 "data_size": 63488 00:15:39.034 } 00:15:39.034 ] 00:15:39.034 }' 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.034 "name": "raid_bdev1", 00:15:39.034 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:39.034 "strip_size_kb": 0, 00:15:39.034 "state": "online", 00:15:39.034 "raid_level": "raid1", 00:15:39.034 "superblock": true, 00:15:39.034 "num_base_bdevs": 4, 00:15:39.034 "num_base_bdevs_discovered": 3, 00:15:39.034 "num_base_bdevs_operational": 3, 00:15:39.034 "process": { 00:15:39.034 "type": "rebuild", 00:15:39.034 "target": "spare", 00:15:39.034 "progress": { 00:15:39.034 "blocks": 26624, 00:15:39.034 "percent": 41 00:15:39.034 } 00:15:39.034 }, 00:15:39.034 "base_bdevs_list": [ 00:15:39.034 { 00:15:39.034 "name": "spare", 00:15:39.034 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:39.034 "is_configured": true, 00:15:39.034 "data_offset": 2048, 00:15:39.034 "data_size": 63488 00:15:39.034 }, 00:15:39.034 { 
00:15:39.034 "name": null, 00:15:39.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.034 "is_configured": false, 00:15:39.034 "data_offset": 0, 00:15:39.034 "data_size": 63488 00:15:39.034 }, 00:15:39.034 { 00:15:39.034 "name": "BaseBdev3", 00:15:39.034 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:39.034 "is_configured": true, 00:15:39.034 "data_offset": 2048, 00:15:39.034 "data_size": 63488 00:15:39.034 }, 00:15:39.034 { 00:15:39.034 "name": "BaseBdev4", 00:15:39.034 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:39.034 "is_configured": true, 00:15:39.034 "data_offset": 2048, 00:15:39.034 "data_size": 63488 00:15:39.034 } 00:15:39.034 ] 00:15:39.034 }' 00:15:39.034 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.293 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.293 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.293 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.293 09:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.230 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.230 "name": "raid_bdev1", 00:15:40.230 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:40.230 "strip_size_kb": 0, 00:15:40.230 "state": "online", 00:15:40.230 "raid_level": "raid1", 00:15:40.230 "superblock": true, 00:15:40.230 "num_base_bdevs": 4, 00:15:40.230 "num_base_bdevs_discovered": 3, 00:15:40.230 "num_base_bdevs_operational": 3, 00:15:40.230 "process": { 00:15:40.230 "type": "rebuild", 00:15:40.230 "target": "spare", 00:15:40.230 "progress": { 00:15:40.230 "blocks": 49152, 00:15:40.230 "percent": 77 00:15:40.230 } 00:15:40.230 }, 00:15:40.230 "base_bdevs_list": [ 00:15:40.230 { 00:15:40.230 "name": "spare", 00:15:40.230 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:40.230 "is_configured": true, 00:15:40.230 "data_offset": 2048, 00:15:40.230 "data_size": 63488 00:15:40.230 }, 00:15:40.231 { 00:15:40.231 "name": null, 00:15:40.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.231 "is_configured": false, 00:15:40.231 "data_offset": 0, 00:15:40.231 "data_size": 63488 00:15:40.231 }, 00:15:40.231 { 00:15:40.231 "name": "BaseBdev3", 00:15:40.231 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:40.231 "is_configured": true, 00:15:40.231 "data_offset": 2048, 00:15:40.231 "data_size": 63488 00:15:40.231 }, 00:15:40.231 { 00:15:40.231 "name": "BaseBdev4", 00:15:40.231 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:40.231 "is_configured": true, 00:15:40.231 "data_offset": 
2048, 00:15:40.231 "data_size": 63488 00:15:40.231 } 00:15:40.231 ] 00:15:40.231 }' 00:15:40.231 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.231 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.231 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.490 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.490 09:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.749 [2024-10-11 09:49:25.349814] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:40.749 [2024-10-11 09:49:25.350012] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:40.749 [2024-10-11 09:49:25.350168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.318 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.318 "name": "raid_bdev1", 00:15:41.318 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:41.318 "strip_size_kb": 0, 00:15:41.318 "state": "online", 00:15:41.318 "raid_level": "raid1", 00:15:41.318 "superblock": true, 00:15:41.318 "num_base_bdevs": 4, 00:15:41.318 "num_base_bdevs_discovered": 3, 00:15:41.318 "num_base_bdevs_operational": 3, 00:15:41.318 "base_bdevs_list": [ 00:15:41.318 { 00:15:41.318 "name": "spare", 00:15:41.318 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:41.318 "is_configured": true, 00:15:41.318 "data_offset": 2048, 00:15:41.318 "data_size": 63488 00:15:41.318 }, 00:15:41.318 { 00:15:41.318 "name": null, 00:15:41.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.318 "is_configured": false, 00:15:41.318 "data_offset": 0, 00:15:41.318 "data_size": 63488 00:15:41.318 }, 00:15:41.318 { 00:15:41.318 "name": "BaseBdev3", 00:15:41.318 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:41.318 "is_configured": true, 00:15:41.318 "data_offset": 2048, 00:15:41.318 "data_size": 63488 00:15:41.318 }, 00:15:41.318 { 00:15:41.318 "name": "BaseBdev4", 00:15:41.318 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:41.318 "is_configured": true, 00:15:41.318 "data_offset": 2048, 00:15:41.318 "data_size": 63488 00:15:41.318 } 00:15:41.318 ] 00:15:41.318 }' 00:15:41.577 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.577 09:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.577 "name": "raid_bdev1", 00:15:41.577 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:41.577 "strip_size_kb": 0, 00:15:41.577 "state": "online", 00:15:41.577 "raid_level": "raid1", 00:15:41.577 "superblock": true, 00:15:41.577 "num_base_bdevs": 4, 00:15:41.577 "num_base_bdevs_discovered": 3, 00:15:41.577 "num_base_bdevs_operational": 3, 00:15:41.577 "base_bdevs_list": [ 00:15:41.577 { 00:15:41.577 "name": "spare", 00:15:41.577 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:41.577 "is_configured": true, 00:15:41.577 "data_offset": 2048, 
00:15:41.577 "data_size": 63488 00:15:41.577 }, 00:15:41.577 { 00:15:41.577 "name": null, 00:15:41.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.577 "is_configured": false, 00:15:41.577 "data_offset": 0, 00:15:41.577 "data_size": 63488 00:15:41.577 }, 00:15:41.577 { 00:15:41.577 "name": "BaseBdev3", 00:15:41.577 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:41.577 "is_configured": true, 00:15:41.577 "data_offset": 2048, 00:15:41.577 "data_size": 63488 00:15:41.577 }, 00:15:41.577 { 00:15:41.577 "name": "BaseBdev4", 00:15:41.577 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:41.577 "is_configured": true, 00:15:41.577 "data_offset": 2048, 00:15:41.577 "data_size": 63488 00:15:41.577 } 00:15:41.577 ] 00:15:41.577 }' 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.577 
09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.577 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.837 "name": "raid_bdev1", 00:15:41.837 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:41.837 "strip_size_kb": 0, 00:15:41.837 "state": "online", 00:15:41.837 "raid_level": "raid1", 00:15:41.837 "superblock": true, 00:15:41.837 "num_base_bdevs": 4, 00:15:41.837 "num_base_bdevs_discovered": 3, 00:15:41.837 "num_base_bdevs_operational": 3, 00:15:41.837 "base_bdevs_list": [ 00:15:41.837 { 00:15:41.837 "name": "spare", 00:15:41.837 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:41.837 "is_configured": true, 00:15:41.837 "data_offset": 2048, 00:15:41.837 "data_size": 63488 00:15:41.837 }, 00:15:41.837 { 00:15:41.837 "name": null, 00:15:41.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.837 "is_configured": false, 00:15:41.837 "data_offset": 0, 00:15:41.837 "data_size": 63488 00:15:41.837 }, 00:15:41.837 { 00:15:41.837 "name": "BaseBdev3", 00:15:41.837 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:41.837 "is_configured": true, 00:15:41.837 "data_offset": 2048, 00:15:41.837 "data_size": 63488 
00:15:41.837 }, 00:15:41.837 { 00:15:41.837 "name": "BaseBdev4", 00:15:41.837 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:41.837 "is_configured": true, 00:15:41.837 "data_offset": 2048, 00:15:41.837 "data_size": 63488 00:15:41.837 } 00:15:41.837 ] 00:15:41.837 }' 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.837 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.097 [2024-10-11 09:49:26.658632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.097 [2024-10-11 09:49:26.658677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.097 [2024-10-11 09:49:26.658815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.097 [2024-10-11 09:49:26.658934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.097 [2024-10-11 09:49:26.658959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.097 
09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.097 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:42.357 /dev/nbd0 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.357 1+0 records in 00:15:42.357 1+0 records out 00:15:42.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443249 s, 9.2 MB/s 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:42.357 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.617 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:42.617 09:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:42.617 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.617 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.617 09:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:42.617 /dev/nbd1 00:15:42.877 09:49:27 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.877 1+0 records in 00:15:42.877 1+0 records out 00:15:42.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651661 s, 6.3 MB/s 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:42.877 09:49:27 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.877 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.446 09:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.446 [2024-10-11 09:49:28.034856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:15:43.446 [2024-10-11 09:49:28.034916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.446 [2024-10-11 09:49:28.034942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:43.446 [2024-10-11 09:49:28.034951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.446 [2024-10-11 09:49:28.037347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.446 spare 00:15:43.446 [2024-10-11 09:49:28.037431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.446 [2024-10-11 09:49:28.037537] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:43.446 [2024-10-11 09:49:28.037605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.446 [2024-10-11 09:49:28.037773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.446 [2024-10-11 09:49:28.037880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.446 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.706 [2024-10-11 09:49:28.137778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:43.706 [2024-10-11 09:49:28.137809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:43.706 [2024-10-11 09:49:28.138120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:43.706 [2024-10-11 09:49:28.138292] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:43.706 [2024-10-11 09:49:28.138303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:43.706 [2024-10-11 09:49:28.138463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.706 "name": "raid_bdev1", 00:15:43.706 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:43.706 "strip_size_kb": 0, 00:15:43.706 "state": "online", 00:15:43.706 "raid_level": "raid1", 00:15:43.706 "superblock": true, 00:15:43.706 "num_base_bdevs": 4, 00:15:43.706 "num_base_bdevs_discovered": 3, 00:15:43.706 "num_base_bdevs_operational": 3, 00:15:43.706 "base_bdevs_list": [ 00:15:43.706 { 00:15:43.706 "name": "spare", 00:15:43.706 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:43.706 "is_configured": true, 00:15:43.706 "data_offset": 2048, 00:15:43.706 "data_size": 63488 00:15:43.706 }, 00:15:43.706 { 00:15:43.706 "name": null, 00:15:43.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.706 "is_configured": false, 00:15:43.706 "data_offset": 2048, 00:15:43.706 "data_size": 63488 00:15:43.706 }, 00:15:43.706 { 00:15:43.706 "name": "BaseBdev3", 00:15:43.706 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:43.706 "is_configured": true, 00:15:43.706 "data_offset": 2048, 00:15:43.706 "data_size": 63488 00:15:43.706 }, 00:15:43.706 { 00:15:43.706 "name": "BaseBdev4", 00:15:43.706 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:43.706 "is_configured": true, 00:15:43.706 "data_offset": 2048, 00:15:43.706 "data_size": 63488 00:15:43.706 } 00:15:43.706 ] 00:15:43.706 }' 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.706 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.966 09:49:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.966 "name": "raid_bdev1", 00:15:43.966 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:43.966 "strip_size_kb": 0, 00:15:43.966 "state": "online", 00:15:43.966 "raid_level": "raid1", 00:15:43.966 "superblock": true, 00:15:43.966 "num_base_bdevs": 4, 00:15:43.966 "num_base_bdevs_discovered": 3, 00:15:43.966 "num_base_bdevs_operational": 3, 00:15:43.966 "base_bdevs_list": [ 00:15:43.966 { 00:15:43.966 "name": "spare", 00:15:43.966 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:43.966 "is_configured": true, 00:15:43.966 "data_offset": 2048, 00:15:43.966 "data_size": 63488 00:15:43.966 }, 00:15:43.966 { 00:15:43.966 "name": null, 00:15:43.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.966 "is_configured": false, 00:15:43.966 "data_offset": 2048, 00:15:43.966 "data_size": 63488 00:15:43.966 }, 00:15:43.966 { 00:15:43.966 "name": "BaseBdev3", 00:15:43.966 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:43.966 "is_configured": true, 00:15:43.966 "data_offset": 2048, 00:15:43.966 "data_size": 63488 00:15:43.966 
}, 00:15:43.966 { 00:15:43.966 "name": "BaseBdev4", 00:15:43.966 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:43.966 "is_configured": true, 00:15:43.966 "data_offset": 2048, 00:15:43.966 "data_size": 63488 00:15:43.966 } 00:15:43.966 ] 00:15:43.966 }' 00:15:43.966 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 [2024-10-11 09:49:28.717739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.226 "name": "raid_bdev1", 00:15:44.226 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:44.226 "strip_size_kb": 0, 00:15:44.226 "state": "online", 00:15:44.226 "raid_level": "raid1", 00:15:44.226 "superblock": true, 00:15:44.226 "num_base_bdevs": 4, 00:15:44.226 "num_base_bdevs_discovered": 2, 00:15:44.226 "num_base_bdevs_operational": 
2, 00:15:44.226 "base_bdevs_list": [ 00:15:44.226 { 00:15:44.226 "name": null, 00:15:44.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.226 "is_configured": false, 00:15:44.226 "data_offset": 0, 00:15:44.226 "data_size": 63488 00:15:44.226 }, 00:15:44.226 { 00:15:44.226 "name": null, 00:15:44.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.226 "is_configured": false, 00:15:44.226 "data_offset": 2048, 00:15:44.226 "data_size": 63488 00:15:44.226 }, 00:15:44.226 { 00:15:44.226 "name": "BaseBdev3", 00:15:44.226 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:44.226 "is_configured": true, 00:15:44.226 "data_offset": 2048, 00:15:44.226 "data_size": 63488 00:15:44.226 }, 00:15:44.226 { 00:15:44.226 "name": "BaseBdev4", 00:15:44.226 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:44.226 "is_configured": true, 00:15:44.226 "data_offset": 2048, 00:15:44.226 "data_size": 63488 00:15:44.226 } 00:15:44.226 ] 00:15:44.226 }' 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.226 09:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.798 09:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:44.798 09:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.798 09:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.798 [2024-10-11 09:49:29.184976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.798 [2024-10-11 09:49:29.185249] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:44.798 [2024-10-11 09:49:29.185308] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:44.798 [2024-10-11 09:49:29.185419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.798 [2024-10-11 09:49:29.200923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:44.798 09:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.798 09:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:44.798 [2024-10-11 09:49:29.202871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.738 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.739 "name": "raid_bdev1", 00:15:45.739 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:45.739 "strip_size_kb": 0, 00:15:45.739 "state": "online", 00:15:45.739 "raid_level": "raid1", 
00:15:45.739 "superblock": true, 00:15:45.739 "num_base_bdevs": 4, 00:15:45.739 "num_base_bdevs_discovered": 3, 00:15:45.739 "num_base_bdevs_operational": 3, 00:15:45.739 "process": { 00:15:45.739 "type": "rebuild", 00:15:45.739 "target": "spare", 00:15:45.739 "progress": { 00:15:45.739 "blocks": 20480, 00:15:45.739 "percent": 32 00:15:45.739 } 00:15:45.739 }, 00:15:45.739 "base_bdevs_list": [ 00:15:45.739 { 00:15:45.739 "name": "spare", 00:15:45.739 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:45.739 "is_configured": true, 00:15:45.739 "data_offset": 2048, 00:15:45.739 "data_size": 63488 00:15:45.739 }, 00:15:45.739 { 00:15:45.739 "name": null, 00:15:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.739 "is_configured": false, 00:15:45.739 "data_offset": 2048, 00:15:45.739 "data_size": 63488 00:15:45.739 }, 00:15:45.739 { 00:15:45.739 "name": "BaseBdev3", 00:15:45.739 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:45.739 "is_configured": true, 00:15:45.739 "data_offset": 2048, 00:15:45.739 "data_size": 63488 00:15:45.739 }, 00:15:45.739 { 00:15:45.739 "name": "BaseBdev4", 00:15:45.739 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:45.739 "is_configured": true, 00:15:45.739 "data_offset": 2048, 00:15:45.739 "data_size": 63488 00:15:45.739 } 00:15:45.739 ] 00:15:45.739 }' 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:45.739 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 [2024-10-11 09:49:30.343154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.000 [2024-10-11 09:49:30.408170] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:46.000 [2024-10-11 09:49:30.408228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.000 [2024-10-11 09:49:30.408264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.000 [2024-10-11 09:49:30.408271] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.000 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.001 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.001 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.001 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.001 "name": "raid_bdev1", 00:15:46.001 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:46.001 "strip_size_kb": 0, 00:15:46.001 "state": "online", 00:15:46.001 "raid_level": "raid1", 00:15:46.001 "superblock": true, 00:15:46.001 "num_base_bdevs": 4, 00:15:46.001 "num_base_bdevs_discovered": 2, 00:15:46.001 "num_base_bdevs_operational": 2, 00:15:46.001 "base_bdevs_list": [ 00:15:46.001 { 00:15:46.001 "name": null, 00:15:46.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.001 "is_configured": false, 00:15:46.001 "data_offset": 0, 00:15:46.001 "data_size": 63488 00:15:46.001 }, 00:15:46.001 { 00:15:46.001 "name": null, 00:15:46.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.001 "is_configured": false, 00:15:46.001 "data_offset": 2048, 00:15:46.001 "data_size": 63488 00:15:46.001 }, 00:15:46.001 { 00:15:46.001 "name": "BaseBdev3", 00:15:46.001 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:46.001 "is_configured": true, 00:15:46.001 "data_offset": 2048, 00:15:46.001 "data_size": 63488 00:15:46.001 }, 00:15:46.001 { 00:15:46.001 "name": "BaseBdev4", 00:15:46.001 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:46.001 "is_configured": true, 00:15:46.001 "data_offset": 2048, 00:15:46.001 "data_size": 63488 00:15:46.001 } 00:15:46.001 ] 00:15:46.001 }' 00:15:46.001 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:46.001 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.262 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.262 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.262 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.262 [2024-10-11 09:49:30.847584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.262 [2024-10-11 09:49:30.847713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.262 [2024-10-11 09:49:30.847798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:46.262 [2024-10-11 09:49:30.847837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.262 [2024-10-11 09:49:30.848409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.262 [2024-10-11 09:49:30.848473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.262 [2024-10-11 09:49:30.848611] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:46.262 [2024-10-11 09:49:30.848655] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:46.262 [2024-10-11 09:49:30.848702] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:46.262 [2024-10-11 09:49:30.848777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.262 [2024-10-11 09:49:30.864118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:46.262 spare 00:15:46.262 09:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.262 [2024-10-11 09:49:30.866030] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.262 09:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.643 "name": "raid_bdev1", 00:15:47.643 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:47.643 "strip_size_kb": 0, 00:15:47.643 "state": "online", 00:15:47.643 
"raid_level": "raid1", 00:15:47.643 "superblock": true, 00:15:47.643 "num_base_bdevs": 4, 00:15:47.643 "num_base_bdevs_discovered": 3, 00:15:47.643 "num_base_bdevs_operational": 3, 00:15:47.643 "process": { 00:15:47.643 "type": "rebuild", 00:15:47.643 "target": "spare", 00:15:47.643 "progress": { 00:15:47.643 "blocks": 20480, 00:15:47.643 "percent": 32 00:15:47.643 } 00:15:47.643 }, 00:15:47.643 "base_bdevs_list": [ 00:15:47.643 { 00:15:47.643 "name": "spare", 00:15:47.643 "uuid": "39116ab4-1083-506b-9d00-583cac341e5a", 00:15:47.643 "is_configured": true, 00:15:47.643 "data_offset": 2048, 00:15:47.643 "data_size": 63488 00:15:47.643 }, 00:15:47.643 { 00:15:47.643 "name": null, 00:15:47.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.643 "is_configured": false, 00:15:47.643 "data_offset": 2048, 00:15:47.643 "data_size": 63488 00:15:47.643 }, 00:15:47.643 { 00:15:47.643 "name": "BaseBdev3", 00:15:47.643 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:47.643 "is_configured": true, 00:15:47.643 "data_offset": 2048, 00:15:47.643 "data_size": 63488 00:15:47.643 }, 00:15:47.643 { 00:15:47.643 "name": "BaseBdev4", 00:15:47.643 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:47.643 "is_configured": true, 00:15:47.643 "data_offset": 2048, 00:15:47.643 "data_size": 63488 00:15:47.643 } 00:15:47.643 ] 00:15:47.643 }' 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.643 09:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.643 [2024-10-11 09:49:32.029357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.643 [2024-10-11 09:49:32.071584] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:47.643 [2024-10-11 09:49:32.071702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.643 [2024-10-11 09:49:32.071764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.643 [2024-10-11 09:49:32.071790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.643 
09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.643 "name": "raid_bdev1", 00:15:47.643 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:47.643 "strip_size_kb": 0, 00:15:47.643 "state": "online", 00:15:47.643 "raid_level": "raid1", 00:15:47.643 "superblock": true, 00:15:47.643 "num_base_bdevs": 4, 00:15:47.643 "num_base_bdevs_discovered": 2, 00:15:47.643 "num_base_bdevs_operational": 2, 00:15:47.643 "base_bdevs_list": [ 00:15:47.643 { 00:15:47.643 "name": null, 00:15:47.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.643 "is_configured": false, 00:15:47.643 "data_offset": 0, 00:15:47.643 "data_size": 63488 00:15:47.643 }, 00:15:47.643 { 00:15:47.643 "name": null, 00:15:47.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.643 "is_configured": false, 00:15:47.643 "data_offset": 2048, 00:15:47.643 "data_size": 63488 00:15:47.643 }, 00:15:47.643 { 00:15:47.643 "name": "BaseBdev3", 00:15:47.643 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:47.643 "is_configured": true, 00:15:47.643 "data_offset": 2048, 00:15:47.643 "data_size": 63488 00:15:47.643 }, 00:15:47.643 { 00:15:47.643 "name": "BaseBdev4", 00:15:47.643 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:47.643 "is_configured": true, 00:15:47.643 "data_offset": 2048, 00:15:47.643 "data_size": 63488 00:15:47.643 } 00:15:47.643 ] 00:15:47.643 }' 00:15:47.643 09:49:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.643 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.212 "name": "raid_bdev1", 00:15:48.212 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:48.212 "strip_size_kb": 0, 00:15:48.212 "state": "online", 00:15:48.212 "raid_level": "raid1", 00:15:48.212 "superblock": true, 00:15:48.212 "num_base_bdevs": 4, 00:15:48.212 "num_base_bdevs_discovered": 2, 00:15:48.212 "num_base_bdevs_operational": 2, 00:15:48.212 "base_bdevs_list": [ 00:15:48.212 { 00:15:48.212 "name": null, 00:15:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.212 "is_configured": false, 00:15:48.212 "data_offset": 0, 00:15:48.212 "data_size": 63488 00:15:48.212 }, 00:15:48.212 
{ 00:15:48.212 "name": null, 00:15:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.212 "is_configured": false, 00:15:48.212 "data_offset": 2048, 00:15:48.212 "data_size": 63488 00:15:48.212 }, 00:15:48.212 { 00:15:48.212 "name": "BaseBdev3", 00:15:48.212 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:48.212 "is_configured": true, 00:15:48.212 "data_offset": 2048, 00:15:48.212 "data_size": 63488 00:15:48.212 }, 00:15:48.212 { 00:15:48.212 "name": "BaseBdev4", 00:15:48.212 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:48.212 "is_configured": true, 00:15:48.212 "data_offset": 2048, 00:15:48.212 "data_size": 63488 00:15:48.212 } 00:15:48.212 ] 00:15:48.212 }' 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.212 [2024-10-11 09:49:32.666225] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.212 [2024-10-11 09:49:32.666339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.212 [2024-10-11 09:49:32.666378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:48.212 [2024-10-11 09:49:32.666410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.212 [2024-10-11 09:49:32.666968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.212 [2024-10-11 09:49:32.667034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.212 [2024-10-11 09:49:32.667163] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:48.212 [2024-10-11 09:49:32.667211] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:48.212 [2024-10-11 09:49:32.667268] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:48.212 [2024-10-11 09:49:32.667320] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:48.212 BaseBdev1 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.212 09:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.152 09:49:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.152 "name": "raid_bdev1", 00:15:49.152 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:49.152 "strip_size_kb": 0, 00:15:49.152 "state": "online", 00:15:49.152 "raid_level": "raid1", 00:15:49.152 "superblock": true, 00:15:49.152 "num_base_bdevs": 4, 00:15:49.152 "num_base_bdevs_discovered": 2, 00:15:49.152 "num_base_bdevs_operational": 2, 00:15:49.152 "base_bdevs_list": [ 00:15:49.152 { 00:15:49.152 "name": null, 00:15:49.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.152 "is_configured": false, 00:15:49.152 "data_offset": 0, 00:15:49.152 "data_size": 63488 00:15:49.152 }, 00:15:49.152 { 00:15:49.152 "name": null, 00:15:49.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.152 
"is_configured": false, 00:15:49.152 "data_offset": 2048, 00:15:49.152 "data_size": 63488 00:15:49.152 }, 00:15:49.152 { 00:15:49.152 "name": "BaseBdev3", 00:15:49.152 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:49.152 "is_configured": true, 00:15:49.152 "data_offset": 2048, 00:15:49.152 "data_size": 63488 00:15:49.152 }, 00:15:49.152 { 00:15:49.152 "name": "BaseBdev4", 00:15:49.152 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:49.152 "is_configured": true, 00:15:49.152 "data_offset": 2048, 00:15:49.152 "data_size": 63488 00:15:49.152 } 00:15:49.152 ] 00:15:49.152 }' 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.152 09:49:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:49.721 "name": "raid_bdev1", 00:15:49.721 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:49.721 "strip_size_kb": 0, 00:15:49.721 "state": "online", 00:15:49.721 "raid_level": "raid1", 00:15:49.721 "superblock": true, 00:15:49.721 "num_base_bdevs": 4, 00:15:49.721 "num_base_bdevs_discovered": 2, 00:15:49.721 "num_base_bdevs_operational": 2, 00:15:49.721 "base_bdevs_list": [ 00:15:49.721 { 00:15:49.721 "name": null, 00:15:49.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.721 "is_configured": false, 00:15:49.721 "data_offset": 0, 00:15:49.721 "data_size": 63488 00:15:49.721 }, 00:15:49.721 { 00:15:49.721 "name": null, 00:15:49.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.721 "is_configured": false, 00:15:49.721 "data_offset": 2048, 00:15:49.721 "data_size": 63488 00:15:49.721 }, 00:15:49.721 { 00:15:49.721 "name": "BaseBdev3", 00:15:49.721 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:49.721 "is_configured": true, 00:15:49.721 "data_offset": 2048, 00:15:49.721 "data_size": 63488 00:15:49.721 }, 00:15:49.721 { 00:15:49.721 "name": "BaseBdev4", 00:15:49.721 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:49.721 "is_configured": true, 00:15:49.721 "data_offset": 2048, 00:15:49.721 "data_size": 63488 00:15:49.721 } 00:15:49.721 ] 00:15:49.721 }' 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.721 [2024-10-11 09:49:34.199859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.721 [2024-10-11 09:49:34.200080] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:49.721 [2024-10-11 09:49:34.200100] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:49.721 request: 00:15:49.721 { 00:15:49.721 "base_bdev": "BaseBdev1", 00:15:49.721 "raid_bdev": "raid_bdev1", 00:15:49.721 "method": "bdev_raid_add_base_bdev", 00:15:49.721 "req_id": 1 00:15:49.721 } 00:15:49.721 Got JSON-RPC error response 00:15:49.721 response: 00:15:49.721 { 00:15:49.721 "code": -22, 00:15:49.721 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:49.721 } 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.721 09:49:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.664 "name": "raid_bdev1", 00:15:50.664 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:50.664 "strip_size_kb": 0, 00:15:50.664 "state": "online", 00:15:50.664 "raid_level": "raid1", 00:15:50.664 "superblock": true, 00:15:50.664 "num_base_bdevs": 4, 00:15:50.664 "num_base_bdevs_discovered": 2, 00:15:50.664 "num_base_bdevs_operational": 2, 00:15:50.664 "base_bdevs_list": [ 00:15:50.664 { 00:15:50.664 "name": null, 00:15:50.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.664 "is_configured": false, 00:15:50.664 "data_offset": 0, 00:15:50.664 "data_size": 63488 00:15:50.664 }, 00:15:50.664 { 00:15:50.664 "name": null, 00:15:50.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.664 "is_configured": false, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 }, 00:15:50.664 { 00:15:50.664 "name": "BaseBdev3", 00:15:50.664 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 }, 00:15:50.664 { 00:15:50.664 "name": "BaseBdev4", 00:15:50.664 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:50.664 "is_configured": true, 00:15:50.664 "data_offset": 2048, 00:15:50.664 "data_size": 63488 00:15:50.664 } 00:15:50.664 ] 00:15:50.664 }' 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.664 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.231 09:49:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.231 "name": "raid_bdev1", 00:15:51.231 "uuid": "1e3c46fd-2866-4389-a8be-8fd41e119113", 00:15:51.231 "strip_size_kb": 0, 00:15:51.231 "state": "online", 00:15:51.231 "raid_level": "raid1", 00:15:51.231 "superblock": true, 00:15:51.231 "num_base_bdevs": 4, 00:15:51.231 "num_base_bdevs_discovered": 2, 00:15:51.231 "num_base_bdevs_operational": 2, 00:15:51.231 "base_bdevs_list": [ 00:15:51.231 { 00:15:51.231 "name": null, 00:15:51.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.231 "is_configured": false, 00:15:51.231 "data_offset": 0, 00:15:51.231 "data_size": 63488 00:15:51.231 }, 00:15:51.231 { 00:15:51.231 "name": null, 00:15:51.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.231 "is_configured": false, 00:15:51.231 "data_offset": 2048, 00:15:51.231 "data_size": 63488 00:15:51.231 }, 00:15:51.231 { 00:15:51.231 "name": "BaseBdev3", 00:15:51.231 "uuid": "a7f0035f-01d9-5883-a8f7-66be346d5b53", 00:15:51.231 "is_configured": true, 00:15:51.231 "data_offset": 2048, 00:15:51.231 "data_size": 63488 00:15:51.231 }, 
00:15:51.231 { 00:15:51.231 "name": "BaseBdev4", 00:15:51.231 "uuid": "08e63434-00d0-508c-9953-ca8af8f5b152", 00:15:51.231 "is_configured": true, 00:15:51.231 "data_offset": 2048, 00:15:51.231 "data_size": 63488 00:15:51.231 } 00:15:51.231 ] 00:15:51.231 }' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78558 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78558 ']' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78558 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78558 00:15:51.231 killing process with pid 78558 00:15:51.231 Received shutdown signal, test time was about 60.000000 seconds 00:15:51.231 00:15:51.231 Latency(us) 00:15:51.231 [2024-10-11T09:49:35.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.231 [2024-10-11T09:49:35.863Z] =================================================================================================================== 00:15:51.231 [2024-10-11T09:49:35.863Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78558' 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78558 00:15:51.231 [2024-10-11 09:49:35.785000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.231 09:49:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78558 00:15:51.231 [2024-10-11 09:49:35.785154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.231 [2024-10-11 09:49:35.785231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.231 [2024-10-11 09:49:35.785308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:51.821 [2024-10-11 09:49:36.295932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.199 00:15:53.199 real 0m27.165s 00:15:53.199 user 0m31.861s 00:15:53.199 sys 0m4.541s 00:15:53.199 ************************************ 00:15:53.199 END TEST raid_rebuild_test_sb 00:15:53.199 ************************************ 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.199 09:49:37 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:53.199 09:49:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:53.199 09:49:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.199 09:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:53.199 ************************************ 00:15:53.199 START TEST raid_rebuild_test_io 00:15:53.199 ************************************ 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79334 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79334 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 79334 ']' 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:53.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.199 09:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.199 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.199 Zero copy mechanism will not be used. 00:15:53.200 [2024-10-11 09:49:37.649916] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:15:53.200 [2024-10-11 09:49:37.650058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79334 ] 00:15:53.200 [2024-10-11 09:49:37.806414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.458 [2024-10-11 09:49:37.932388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.717 [2024-10-11 09:49:38.171255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.717 [2024-10-11 09:49:38.171290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:53.977 
09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.977 BaseBdev1_malloc 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.977 [2024-10-11 09:49:38.547280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:53.977 [2024-10-11 09:49:38.547349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.977 [2024-10-11 09:49:38.547377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:53.977 [2024-10-11 09:49:38.547388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.977 [2024-10-11 09:49:38.549741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.977 [2024-10-11 09:49:38.549787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:53.977 BaseBdev1 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:53.977 BaseBdev2_malloc 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.977 [2024-10-11 09:49:38.600236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:53.977 [2024-10-11 09:49:38.600343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.977 [2024-10-11 09:49:38.600367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.977 [2024-10-11 09:49:38.600378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.977 [2024-10-11 09:49:38.602583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.977 [2024-10-11 09:49:38.602625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:53.977 BaseBdev2 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.977 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 BaseBdev3_malloc 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 [2024-10-11 09:49:38.669979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:54.237 [2024-10-11 09:49:38.670044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.237 [2024-10-11 09:49:38.670069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.237 [2024-10-11 09:49:38.670081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.237 [2024-10-11 09:49:38.672345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.237 [2024-10-11 09:49:38.672385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.237 BaseBdev3 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 BaseBdev4_malloc 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 [2024-10-11 09:49:38.724649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:54.237 [2024-10-11 09:49:38.724712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.237 [2024-10-11 09:49:38.724733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:54.237 [2024-10-11 09:49:38.724758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.237 [2024-10-11 09:49:38.727086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.237 [2024-10-11 09:49:38.727126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:54.237 BaseBdev4 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 spare_malloc 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 spare_delay 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 [2024-10-11 09:49:38.793900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.237 [2024-10-11 09:49:38.793961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.237 [2024-10-11 09:49:38.793985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:54.237 [2024-10-11 09:49:38.794012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.237 [2024-10-11 09:49:38.796332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.237 [2024-10-11 09:49:38.796440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.237 spare 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 [2024-10-11 09:49:38.801943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.237 [2024-10-11 09:49:38.804033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.237 [2024-10-11 09:49:38.804150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.237 [2024-10-11 09:49:38.804244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:54.237 [2024-10-11 09:49:38.804359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:54.237 [2024-10-11 09:49:38.804409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:54.237 [2024-10-11 09:49:38.804692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:54.237 [2024-10-11 09:49:38.804924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:54.237 [2024-10-11 09:49:38.804971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:54.237 [2024-10-11 09:49:38.805178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.237 09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.237 "name": "raid_bdev1", 00:15:54.237 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:54.237 "strip_size_kb": 0, 00:15:54.237 "state": "online", 00:15:54.237 "raid_level": "raid1", 00:15:54.237 "superblock": false, 00:15:54.237 "num_base_bdevs": 4, 00:15:54.237 "num_base_bdevs_discovered": 4, 00:15:54.237 "num_base_bdevs_operational": 4, 00:15:54.237 "base_bdevs_list": [ 00:15:54.237 { 00:15:54.237 "name": "BaseBdev1", 00:15:54.237 "uuid": "582af85b-3802-5bbc-9808-28cb85e6d295", 00:15:54.237 "is_configured": true, 00:15:54.237 "data_offset": 0, 00:15:54.237 "data_size": 65536 00:15:54.237 }, 00:15:54.237 { 00:15:54.237 "name": "BaseBdev2", 00:15:54.238 "uuid": "312132bc-6b14-53e4-8e48-e6cf234c59f9", 00:15:54.238 "is_configured": true, 00:15:54.238 "data_offset": 0, 00:15:54.238 "data_size": 65536 00:15:54.238 }, 00:15:54.238 { 00:15:54.238 "name": "BaseBdev3", 00:15:54.238 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:54.238 "is_configured": true, 00:15:54.238 "data_offset": 0, 00:15:54.238 "data_size": 65536 00:15:54.238 }, 00:15:54.238 { 00:15:54.238 "name": "BaseBdev4", 00:15:54.238 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:54.238 "is_configured": true, 00:15:54.238 "data_offset": 0, 00:15:54.238 "data_size": 65536 00:15:54.238 } 00:15:54.238 ] 00:15:54.238 }' 00:15:54.238 
09:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.238 09:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.806 [2024-10-11 09:49:39.273550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:54.806 09:49:39 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.806 [2024-10-11 09:49:39.348983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.806 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.806 "name": "raid_bdev1", 00:15:54.806 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:54.806 "strip_size_kb": 0, 00:15:54.806 "state": "online", 00:15:54.806 "raid_level": "raid1", 00:15:54.806 "superblock": false, 00:15:54.806 "num_base_bdevs": 4, 00:15:54.806 "num_base_bdevs_discovered": 3, 00:15:54.806 "num_base_bdevs_operational": 3, 00:15:54.806 "base_bdevs_list": [ 00:15:54.806 { 00:15:54.806 "name": null, 00:15:54.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.806 "is_configured": false, 00:15:54.806 "data_offset": 0, 00:15:54.806 "data_size": 65536 00:15:54.806 }, 00:15:54.806 { 00:15:54.806 "name": "BaseBdev2", 00:15:54.806 "uuid": "312132bc-6b14-53e4-8e48-e6cf234c59f9", 00:15:54.806 "is_configured": true, 00:15:54.806 "data_offset": 0, 00:15:54.806 "data_size": 65536 00:15:54.807 }, 00:15:54.807 { 00:15:54.807 "name": "BaseBdev3", 00:15:54.807 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:54.807 "is_configured": true, 00:15:54.807 "data_offset": 0, 00:15:54.807 "data_size": 65536 00:15:54.807 }, 00:15:54.807 { 00:15:54.807 "name": "BaseBdev4", 00:15:54.807 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:54.807 "is_configured": true, 00:15:54.807 "data_offset": 0, 00:15:54.807 "data_size": 65536 00:15:54.807 } 00:15:54.807 ] 00:15:54.807 }' 00:15:54.807 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.807 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.807 [2024-10-11 09:49:39.434042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:54.807 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.807 Zero copy mechanism will not be used. 00:15:54.807 Running I/O for 60 seconds... 
00:15:55.375 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.375 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.375 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.375 [2024-10-11 09:49:39.757889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.375 09:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.375 09:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:55.375 [2024-10-11 09:49:39.833961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:55.375 [2024-10-11 09:49:39.836059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.375 [2024-10-11 09:49:39.967289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:55.633 [2024-10-11 09:49:40.196519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:55.633 [2024-10-11 09:49:40.196991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:55.921 153.00 IOPS, 459.00 MiB/s [2024-10-11T09:49:40.553Z] [2024-10-11 09:49:40.456662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:56.180 [2024-10-11 09:49:40.718939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:56.180 [2024-10-11 09:49:40.719887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.439 "name": "raid_bdev1", 00:15:56.439 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:56.439 "strip_size_kb": 0, 00:15:56.439 "state": "online", 00:15:56.439 "raid_level": "raid1", 00:15:56.439 "superblock": false, 00:15:56.439 "num_base_bdevs": 4, 00:15:56.439 "num_base_bdevs_discovered": 4, 00:15:56.439 "num_base_bdevs_operational": 4, 00:15:56.439 "process": { 00:15:56.439 "type": "rebuild", 00:15:56.439 "target": "spare", 00:15:56.439 "progress": { 00:15:56.439 "blocks": 10240, 00:15:56.439 "percent": 15 00:15:56.439 } 00:15:56.439 }, 00:15:56.439 "base_bdevs_list": [ 00:15:56.439 { 00:15:56.439 "name": "spare", 00:15:56.439 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:15:56.439 "is_configured": true, 00:15:56.439 "data_offset": 0, 00:15:56.439 "data_size": 65536 00:15:56.439 }, 00:15:56.439 { 
00:15:56.439 "name": "BaseBdev2", 00:15:56.439 "uuid": "312132bc-6b14-53e4-8e48-e6cf234c59f9", 00:15:56.439 "is_configured": true, 00:15:56.439 "data_offset": 0, 00:15:56.439 "data_size": 65536 00:15:56.439 }, 00:15:56.439 { 00:15:56.439 "name": "BaseBdev3", 00:15:56.439 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:56.439 "is_configured": true, 00:15:56.439 "data_offset": 0, 00:15:56.439 "data_size": 65536 00:15:56.439 }, 00:15:56.439 { 00:15:56.439 "name": "BaseBdev4", 00:15:56.439 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:56.439 "is_configured": true, 00:15:56.439 "data_offset": 0, 00:15:56.439 "data_size": 65536 00:15:56.439 } 00:15:56.439 ] 00:15:56.439 }' 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.439 09:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.439 [2024-10-11 09:49:40.939482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.439 [2024-10-11 09:49:41.044089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:56.698 [2024-10-11 09:49:41.143963] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:56.698 [2024-10-11 09:49:41.146932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:56.698 [2024-10-11 09:49:41.146983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.698 [2024-10-11 09:49:41.146998] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:56.698 [2024-10-11 09:49:41.177116] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.698 "name": "raid_bdev1", 00:15:56.698 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:56.698 "strip_size_kb": 0, 00:15:56.698 "state": "online", 00:15:56.698 "raid_level": "raid1", 00:15:56.698 "superblock": false, 00:15:56.698 "num_base_bdevs": 4, 00:15:56.698 "num_base_bdevs_discovered": 3, 00:15:56.698 "num_base_bdevs_operational": 3, 00:15:56.698 "base_bdevs_list": [ 00:15:56.698 { 00:15:56.698 "name": null, 00:15:56.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.698 "is_configured": false, 00:15:56.698 "data_offset": 0, 00:15:56.698 "data_size": 65536 00:15:56.698 }, 00:15:56.698 { 00:15:56.698 "name": "BaseBdev2", 00:15:56.698 "uuid": "312132bc-6b14-53e4-8e48-e6cf234c59f9", 00:15:56.698 "is_configured": true, 00:15:56.698 "data_offset": 0, 00:15:56.698 "data_size": 65536 00:15:56.698 }, 00:15:56.698 { 00:15:56.698 "name": "BaseBdev3", 00:15:56.698 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:56.698 "is_configured": true, 00:15:56.698 "data_offset": 0, 00:15:56.698 "data_size": 65536 00:15:56.698 }, 00:15:56.698 { 00:15:56.698 "name": "BaseBdev4", 00:15:56.698 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:56.698 "is_configured": true, 00:15:56.698 "data_offset": 0, 00:15:56.698 "data_size": 65536 00:15:56.698 } 00:15:56.698 ] 00:15:56.698 }' 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.698 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.216 129.50 IOPS, 388.50 MiB/s [2024-10-11T09:49:41.848Z] 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.216 09:49:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.216 "name": "raid_bdev1", 00:15:57.216 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:57.216 "strip_size_kb": 0, 00:15:57.216 "state": "online", 00:15:57.216 "raid_level": "raid1", 00:15:57.216 "superblock": false, 00:15:57.216 "num_base_bdevs": 4, 00:15:57.216 "num_base_bdevs_discovered": 3, 00:15:57.216 "num_base_bdevs_operational": 3, 00:15:57.216 "base_bdevs_list": [ 00:15:57.216 { 00:15:57.216 "name": null, 00:15:57.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.216 "is_configured": false, 00:15:57.216 "data_offset": 0, 00:15:57.216 "data_size": 65536 00:15:57.216 }, 00:15:57.216 { 00:15:57.216 "name": "BaseBdev2", 00:15:57.216 "uuid": "312132bc-6b14-53e4-8e48-e6cf234c59f9", 00:15:57.216 "is_configured": true, 00:15:57.216 "data_offset": 0, 00:15:57.216 "data_size": 65536 00:15:57.216 }, 00:15:57.216 { 00:15:57.216 "name": "BaseBdev3", 00:15:57.216 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 
00:15:57.216 "is_configured": true, 00:15:57.216 "data_offset": 0, 00:15:57.216 "data_size": 65536 00:15:57.216 }, 00:15:57.216 { 00:15:57.216 "name": "BaseBdev4", 00:15:57.216 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:57.216 "is_configured": true, 00:15:57.216 "data_offset": 0, 00:15:57.216 "data_size": 65536 00:15:57.216 } 00:15:57.216 ] 00:15:57.216 }' 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.216 [2024-10-11 09:49:41.746493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.216 09:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:57.216 [2024-10-11 09:49:41.810565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:57.217 [2024-10-11 09:49:41.812795] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.476 [2024-10-11 09:49:41.921302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:57.476 [2024-10-11 09:49:41.922867] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:57.735 [2024-10-11 09:49:42.147830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:57.735 [2024-10-11 09:49:42.148274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:57.995 125.67 IOPS, 377.00 MiB/s [2024-10-11T09:49:42.627Z] [2024-10-11 09:49:42.606136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:58.253 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.253 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.253 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.253 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.254 "name": "raid_bdev1", 00:15:58.254 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:58.254 "strip_size_kb": 0, 00:15:58.254 "state": "online", 00:15:58.254 "raid_level": "raid1", 
00:15:58.254 "superblock": false, 00:15:58.254 "num_base_bdevs": 4, 00:15:58.254 "num_base_bdevs_discovered": 4, 00:15:58.254 "num_base_bdevs_operational": 4, 00:15:58.254 "process": { 00:15:58.254 "type": "rebuild", 00:15:58.254 "target": "spare", 00:15:58.254 "progress": { 00:15:58.254 "blocks": 12288, 00:15:58.254 "percent": 18 00:15:58.254 } 00:15:58.254 }, 00:15:58.254 "base_bdevs_list": [ 00:15:58.254 { 00:15:58.254 "name": "spare", 00:15:58.254 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:15:58.254 "is_configured": true, 00:15:58.254 "data_offset": 0, 00:15:58.254 "data_size": 65536 00:15:58.254 }, 00:15:58.254 { 00:15:58.254 "name": "BaseBdev2", 00:15:58.254 "uuid": "312132bc-6b14-53e4-8e48-e6cf234c59f9", 00:15:58.254 "is_configured": true, 00:15:58.254 "data_offset": 0, 00:15:58.254 "data_size": 65536 00:15:58.254 }, 00:15:58.254 { 00:15:58.254 "name": "BaseBdev3", 00:15:58.254 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:58.254 "is_configured": true, 00:15:58.254 "data_offset": 0, 00:15:58.254 "data_size": 65536 00:15:58.254 }, 00:15:58.254 { 00:15:58.254 "name": "BaseBdev4", 00:15:58.254 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:58.254 "is_configured": true, 00:15:58.254 "data_offset": 0, 00:15:58.254 "data_size": 65536 00:15:58.254 } 00:15:58.254 ] 00:15:58.254 }' 00:15:58.254 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.512 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.512 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.512 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.512 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:58.512 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=4 00:15:58.512 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.513 [2024-10-11 09:49:42.946438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:58.513 [2024-10-11 09:49:42.962362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:58.513 [2024-10-11 09:49:42.962732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:58.513 [2024-10-11 09:49:42.969536] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:58.513 [2024-10-11 09:49:42.969596] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:58.513 [2024-10-11 09:49:42.978833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.513 09:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.513 "name": "raid_bdev1", 00:15:58.513 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:58.513 "strip_size_kb": 0, 00:15:58.513 "state": "online", 00:15:58.513 "raid_level": "raid1", 00:15:58.513 "superblock": false, 00:15:58.513 "num_base_bdevs": 4, 00:15:58.513 "num_base_bdevs_discovered": 3, 00:15:58.513 "num_base_bdevs_operational": 3, 00:15:58.513 "process": { 00:15:58.513 "type": "rebuild", 00:15:58.513 "target": "spare", 00:15:58.513 "progress": { 00:15:58.513 "blocks": 16384, 00:15:58.513 "percent": 25 00:15:58.513 } 00:15:58.513 }, 00:15:58.513 "base_bdevs_list": [ 00:15:58.513 { 00:15:58.513 "name": "spare", 00:15:58.513 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 0, 00:15:58.513 "data_size": 65536 00:15:58.513 }, 00:15:58.513 { 00:15:58.513 "name": null, 00:15:58.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.513 "is_configured": false, 00:15:58.513 "data_offset": 0, 00:15:58.513 
"data_size": 65536 00:15:58.513 }, 00:15:58.513 { 00:15:58.513 "name": "BaseBdev3", 00:15:58.513 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 0, 00:15:58.513 "data_size": 65536 00:15:58.513 }, 00:15:58.513 { 00:15:58.513 "name": "BaseBdev4", 00:15:58.513 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 0, 00:15:58.513 "data_size": 65536 00:15:58.513 } 00:15:58.513 ] 00:15:58.513 }' 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=499 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.513 
09:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.513 09:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.772 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.772 "name": "raid_bdev1", 00:15:58.772 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:58.772 "strip_size_kb": 0, 00:15:58.772 "state": "online", 00:15:58.772 "raid_level": "raid1", 00:15:58.772 "superblock": false, 00:15:58.772 "num_base_bdevs": 4, 00:15:58.772 "num_base_bdevs_discovered": 3, 00:15:58.772 "num_base_bdevs_operational": 3, 00:15:58.772 "process": { 00:15:58.772 "type": "rebuild", 00:15:58.772 "target": "spare", 00:15:58.772 "progress": { 00:15:58.772 "blocks": 18432, 00:15:58.773 "percent": 28 00:15:58.773 } 00:15:58.773 }, 00:15:58.773 "base_bdevs_list": [ 00:15:58.773 { 00:15:58.773 "name": "spare", 00:15:58.773 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:15:58.773 "is_configured": true, 00:15:58.773 "data_offset": 0, 00:15:58.773 "data_size": 65536 00:15:58.773 }, 00:15:58.773 { 00:15:58.773 "name": null, 00:15:58.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.773 "is_configured": false, 00:15:58.773 "data_offset": 0, 00:15:58.773 "data_size": 65536 00:15:58.773 }, 00:15:58.773 { 00:15:58.773 "name": "BaseBdev3", 00:15:58.773 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:58.773 "is_configured": true, 00:15:58.773 "data_offset": 0, 00:15:58.773 "data_size": 65536 00:15:58.773 }, 00:15:58.773 { 00:15:58.773 "name": "BaseBdev4", 00:15:58.773 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:58.773 "is_configured": true, 00:15:58.773 "data_offset": 0, 00:15:58.773 "data_size": 65536 00:15:58.773 } 00:15:58.773 ] 00:15:58.773 }' 00:15:58.773 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:15:58.773 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.773 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.773 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.773 09:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.773 [2024-10-11 09:49:43.348945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:58.773 [2024-10-11 09:49:43.349268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:59.290 112.25 IOPS, 336.75 MiB/s [2024-10-11T09:49:43.922Z] [2024-10-11 09:49:43.676255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:59.290 [2024-10-11 09:49:43.676916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:59.549 [2024-10-11 09:49:44.053422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.808 "name": "raid_bdev1", 00:15:59.808 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:15:59.808 "strip_size_kb": 0, 00:15:59.808 "state": "online", 00:15:59.808 "raid_level": "raid1", 00:15:59.808 "superblock": false, 00:15:59.808 "num_base_bdevs": 4, 00:15:59.808 "num_base_bdevs_discovered": 3, 00:15:59.808 "num_base_bdevs_operational": 3, 00:15:59.808 "process": { 00:15:59.808 "type": "rebuild", 00:15:59.808 "target": "spare", 00:15:59.808 "progress": { 00:15:59.808 "blocks": 34816, 00:15:59.808 "percent": 53 00:15:59.808 } 00:15:59.808 }, 00:15:59.808 "base_bdevs_list": [ 00:15:59.808 { 00:15:59.808 "name": "spare", 00:15:59.808 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:15:59.808 "is_configured": true, 00:15:59.808 "data_offset": 0, 00:15:59.808 "data_size": 65536 00:15:59.808 }, 00:15:59.808 { 00:15:59.808 "name": null, 00:15:59.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.808 "is_configured": false, 00:15:59.808 "data_offset": 0, 00:15:59.808 "data_size": 65536 00:15:59.808 }, 00:15:59.808 { 00:15:59.808 "name": "BaseBdev3", 00:15:59.808 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:15:59.808 "is_configured": true, 00:15:59.808 "data_offset": 0, 00:15:59.808 "data_size": 65536 00:15:59.808 }, 00:15:59.808 { 00:15:59.808 "name": "BaseBdev4", 00:15:59.808 "uuid": 
"6d2192ee-8668-5c40-aa01-668a442170ac", 00:15:59.808 "is_configured": true, 00:15:59.808 "data_offset": 0, 00:15:59.808 "data_size": 65536 00:15:59.808 } 00:15:59.808 ] 00:15:59.808 }' 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.808 09:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.376 101.20 IOPS, 303.60 MiB/s [2024-10-11T09:49:45.008Z] [2024-10-11 09:49:44.945514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.944 09:49:45 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.944 "name": "raid_bdev1", 00:16:00.944 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:16:00.944 "strip_size_kb": 0, 00:16:00.944 "state": "online", 00:16:00.944 "raid_level": "raid1", 00:16:00.944 "superblock": false, 00:16:00.944 "num_base_bdevs": 4, 00:16:00.944 "num_base_bdevs_discovered": 3, 00:16:00.944 "num_base_bdevs_operational": 3, 00:16:00.944 "process": { 00:16:00.944 "type": "rebuild", 00:16:00.944 "target": "spare", 00:16:00.944 "progress": { 00:16:00.944 "blocks": 53248, 00:16:00.944 "percent": 81 00:16:00.944 } 00:16:00.944 }, 00:16:00.944 "base_bdevs_list": [ 00:16:00.944 { 00:16:00.944 "name": "spare", 00:16:00.944 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:16:00.944 "is_configured": true, 00:16:00.944 "data_offset": 0, 00:16:00.944 "data_size": 65536 00:16:00.944 }, 00:16:00.944 { 00:16:00.944 "name": null, 00:16:00.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.944 "is_configured": false, 00:16:00.944 "data_offset": 0, 00:16:00.944 "data_size": 65536 00:16:00.944 }, 00:16:00.944 { 00:16:00.944 "name": "BaseBdev3", 00:16:00.944 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:16:00.944 "is_configured": true, 00:16:00.944 "data_offset": 0, 00:16:00.944 "data_size": 65536 00:16:00.944 }, 00:16:00.944 { 00:16:00.944 "name": "BaseBdev4", 00:16:00.944 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:16:00.944 "is_configured": true, 00:16:00.944 "data_offset": 0, 00:16:00.944 "data_size": 65536 00:16:00.944 } 00:16:00.944 ] 00:16:00.944 }' 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.944 93.17 IOPS, 279.50 MiB/s [2024-10-11T09:49:45.576Z] 09:49:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.944 09:49:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.201 [2024-10-11 09:49:45.632271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:01.458 [2024-10-11 09:49:46.061292] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:01.716 [2024-10-11 09:49:46.161087] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:01.716 [2024-10-11 09:49:46.164517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.974 85.29 IOPS, 255.86 MiB/s [2024-10-11T09:49:46.606Z] 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.974 "name": "raid_bdev1", 00:16:01.974 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:16:01.974 "strip_size_kb": 0, 00:16:01.974 "state": "online", 00:16:01.974 "raid_level": "raid1", 00:16:01.974 "superblock": false, 00:16:01.974 "num_base_bdevs": 4, 00:16:01.974 "num_base_bdevs_discovered": 3, 00:16:01.974 "num_base_bdevs_operational": 3, 00:16:01.974 "base_bdevs_list": [ 00:16:01.974 { 00:16:01.974 "name": "spare", 00:16:01.974 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:16:01.974 "is_configured": true, 00:16:01.974 "data_offset": 0, 00:16:01.974 "data_size": 65536 00:16:01.974 }, 00:16:01.974 { 00:16:01.974 "name": null, 00:16:01.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.974 "is_configured": false, 00:16:01.974 "data_offset": 0, 00:16:01.974 "data_size": 65536 00:16:01.974 }, 00:16:01.974 { 00:16:01.974 "name": "BaseBdev3", 00:16:01.974 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:16:01.974 "is_configured": true, 00:16:01.974 "data_offset": 0, 00:16:01.974 "data_size": 65536 00:16:01.974 }, 00:16:01.974 { 00:16:01.974 "name": "BaseBdev4", 00:16:01.974 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:16:01.974 "is_configured": true, 00:16:01.974 "data_offset": 0, 00:16:01.974 "data_size": 65536 00:16:01.974 } 00:16:01.974 ] 00:16:01.974 }' 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:01.974 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.231 "name": "raid_bdev1", 00:16:02.231 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:16:02.231 "strip_size_kb": 0, 00:16:02.231 "state": "online", 00:16:02.231 "raid_level": "raid1", 00:16:02.231 "superblock": false, 00:16:02.231 "num_base_bdevs": 4, 00:16:02.231 "num_base_bdevs_discovered": 3, 00:16:02.231 "num_base_bdevs_operational": 3, 00:16:02.231 "base_bdevs_list": [ 00:16:02.231 { 00:16:02.231 "name": "spare", 00:16:02.231 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:16:02.231 "is_configured": true, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 
65536 00:16:02.231 }, 00:16:02.231 { 00:16:02.231 "name": null, 00:16:02.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.231 "is_configured": false, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 65536 00:16:02.231 }, 00:16:02.231 { 00:16:02.231 "name": "BaseBdev3", 00:16:02.231 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:16:02.231 "is_configured": true, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 65536 00:16:02.231 }, 00:16:02.231 { 00:16:02.231 "name": "BaseBdev4", 00:16:02.231 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:16:02.231 "is_configured": true, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 65536 00:16:02.231 } 00:16:02.231 ] 00:16:02.231 }' 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.231 09:49:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.231 "name": "raid_bdev1", 00:16:02.231 "uuid": "3c0b314b-4797-45c3-bd69-05a4818370be", 00:16:02.231 "strip_size_kb": 0, 00:16:02.231 "state": "online", 00:16:02.231 "raid_level": "raid1", 00:16:02.231 "superblock": false, 00:16:02.231 "num_base_bdevs": 4, 00:16:02.231 "num_base_bdevs_discovered": 3, 00:16:02.231 "num_base_bdevs_operational": 3, 00:16:02.231 "base_bdevs_list": [ 00:16:02.231 { 00:16:02.231 "name": "spare", 00:16:02.231 "uuid": "c5d31750-45de-5cb0-835f-0313b9ef7d08", 00:16:02.231 "is_configured": true, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 65536 00:16:02.231 }, 00:16:02.231 { 00:16:02.231 "name": null, 00:16:02.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.231 "is_configured": false, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 65536 00:16:02.231 }, 00:16:02.231 { 00:16:02.231 "name": "BaseBdev3", 00:16:02.231 "uuid": "16769917-d8a4-5a22-be1b-52bfdbbc7b1b", 00:16:02.231 "is_configured": true, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 65536 00:16:02.231 }, 
00:16:02.231 { 00:16:02.231 "name": "BaseBdev4", 00:16:02.231 "uuid": "6d2192ee-8668-5c40-aa01-668a442170ac", 00:16:02.231 "is_configured": true, 00:16:02.231 "data_offset": 0, 00:16:02.231 "data_size": 65536 00:16:02.231 } 00:16:02.231 ] 00:16:02.231 }' 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.231 09:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.796 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.796 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.796 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.796 [2024-10-11 09:49:47.180408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.796 [2024-10-11 09:49:47.180443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.796 00:16:02.796 Latency(us) 00:16:02.796 [2024-10-11T09:49:47.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.796 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:02.796 raid_bdev1 : 7.85 79.52 238.56 0.00 0.00 17764.19 330.90 119968.08 00:16:02.796 [2024-10-11T09:49:47.428Z] =================================================================================================================== 00:16:02.796 [2024-10-11T09:49:47.428Z] Total : 79.52 238.56 0.00 0.00 17764.19 330.90 119968.08 00:16:02.796 [2024-10-11 09:49:47.292104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.796 [2024-10-11 09:49:47.292256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.796 [2024-10-11 09:49:47.292406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:02.796 [2024-10-11 09:49:47.292467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:02.796 { 00:16:02.796 "results": [ 00:16:02.796 { 00:16:02.796 "job": "raid_bdev1", 00:16:02.796 "core_mask": "0x1", 00:16:02.796 "workload": "randrw", 00:16:02.796 "percentage": 50, 00:16:02.796 "status": "finished", 00:16:02.796 "queue_depth": 2, 00:16:02.796 "io_size": 3145728, 00:16:02.796 "runtime": 7.846931, 00:16:02.796 "iops": 79.52153523460318, 00:16:02.796 "mibps": 238.56460570380955, 00:16:02.796 "io_failed": 0, 00:16:02.796 "io_timeout": 0, 00:16:02.796 "avg_latency_us": 17764.189989922743, 00:16:02.796 "min_latency_us": 330.89956331877727, 00:16:02.796 "max_latency_us": 119968.08384279476 00:16:02.796 } 00:16:02.796 ], 00:16:02.797 "core_count": 1 00:16:02.797 } 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:02.797 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:03.055 /dev/nbd0 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.055 09:49:47 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.055 1+0 records in 00:16:03.055 1+0 records out 00:16:03.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532131 s, 7.7 MB/s 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:03.055 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:03.056 09:49:47 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.056 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:03.314 /dev/nbd1 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.314 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:03.314 1+0 records in 00:16:03.314 1+0 records out 00:16:03.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216545 s, 18.9 MB/s 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.315 09:49:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:03.573 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:03.573 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.573 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:03.573 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:03.573 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:03.573 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.573 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:03.830 09:49:48 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.830 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:16:04.088 /dev/nbd1 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.088 1+0 records in 00:16:04.088 1+0 records out 00:16:04.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253599 s, 16.2 MB/s 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # return 0 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:04.088 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.089 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.348 09:49:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.348 09:49:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79334 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 79334 ']' 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 
-- # kill -0 79334 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79334 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.608 killing process with pid 79334 00:16:04.608 Received shutdown signal, test time was about 9.713311 seconds 00:16:04.608 00:16:04.608 Latency(us) 00:16:04.608 [2024-10-11T09:49:49.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.608 [2024-10-11T09:49:49.240Z] =================================================================================================================== 00:16:04.608 [2024-10-11T09:49:49.240Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79334' 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 79334 00:16:04.608 [2024-10-11 09:49:49.130976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.608 09:49:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 79334 00:16:05.177 [2024-10-11 09:49:49.562979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:06.556 00:16:06.556 real 0m13.222s 00:16:06.556 user 0m16.574s 00:16:06.556 sys 0m1.750s 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.556 ************************************ 00:16:06.556 END TEST 
raid_rebuild_test_io 00:16:06.556 ************************************ 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.556 09:49:50 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:06.556 09:49:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:06.556 09:49:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.556 09:49:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.556 ************************************ 00:16:06.556 START TEST raid_rebuild_test_sb_io 00:16:06.556 ************************************ 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 
00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79743 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79743 00:16:06.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79743 ']' 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.556 09:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.556 [2024-10-11 09:49:50.914362] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:16:06.556 [2024-10-11 09:49:50.914609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79743 ] 00:16:06.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:06.556 Zero copy mechanism will not be used. 
00:16:06.556 [2024-10-11 09:49:51.077338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.816 [2024-10-11 09:49:51.200568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.816 [2024-10-11 09:49:51.430896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.816 [2024-10-11 09:49:51.431087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.385 BaseBdev1_malloc 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.385 [2024-10-11 09:49:51.858115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:07.385 [2024-10-11 09:49:51.858205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.385 [2024-10-11 09:49:51.858233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:16:07.385 [2024-10-11 09:49:51.858246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.385 [2024-10-11 09:49:51.860698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.385 [2024-10-11 09:49:51.860813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:07.385 BaseBdev1 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.385 BaseBdev2_malloc 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.385 [2024-10-11 09:49:51.921854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:07.385 [2024-10-11 09:49:51.921974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.385 [2024-10-11 09:49:51.922014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:07.385 [2024-10-11 09:49:51.922026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.385 [2024-10-11 09:49:51.924529] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.385 [2024-10-11 09:49:51.924576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:07.385 BaseBdev2 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.385 BaseBdev3_malloc 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.385 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:07.386 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.386 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.386 [2024-10-11 09:49:51.994031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:07.386 [2024-10-11 09:49:51.994095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.386 [2024-10-11 09:49:51.994116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:07.386 [2024-10-11 09:49:51.994126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.386 [2024-10-11 09:49:51.996438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.386 [2024-10-11 09:49:51.996481] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:16:07.386 BaseBdev3 00:16:07.386 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.386 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.386 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:07.386 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.386 09:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 BaseBdev4_malloc 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 [2024-10-11 09:49:52.055352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:07.645 [2024-10-11 09:49:52.055432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.645 [2024-10-11 09:49:52.055459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:07.645 [2024-10-11 09:49:52.055471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.645 [2024-10-11 09:49:52.057939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.645 [2024-10-11 09:49:52.058078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:07.645 BaseBdev4 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 spare_malloc 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 spare_delay 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 [2024-10-11 09:49:52.130678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.645 [2024-10-11 09:49:52.130879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.645 [2024-10-11 09:49:52.130916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:07.645 [2024-10-11 09:49:52.130928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.645 [2024-10-11 09:49:52.133375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.645 [2024-10-11 09:49:52.133423] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.645 spare 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 [2024-10-11 09:49:52.142750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.645 [2024-10-11 09:49:52.144722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.645 [2024-10-11 09:49:52.144813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.645 [2024-10-11 09:49:52.144881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:07.645 [2024-10-11 09:49:52.145109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:07.645 [2024-10-11 09:49:52.145126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:07.645 [2024-10-11 09:49:52.145432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:07.645 [2024-10-11 09:49:52.145645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:07.645 [2024-10-11 09:49:52.145656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:07.645 [2024-10-11 09:49:52.145858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.645 "name": "raid_bdev1", 00:16:07.645 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:07.645 "strip_size_kb": 0, 00:16:07.645 "state": "online", 00:16:07.645 "raid_level": "raid1", 
00:16:07.645 "superblock": true, 00:16:07.645 "num_base_bdevs": 4, 00:16:07.645 "num_base_bdevs_discovered": 4, 00:16:07.645 "num_base_bdevs_operational": 4, 00:16:07.645 "base_bdevs_list": [ 00:16:07.645 { 00:16:07.645 "name": "BaseBdev1", 00:16:07.645 "uuid": "74159cf0-b7d7-5116-85ce-aefc5707c63f", 00:16:07.645 "is_configured": true, 00:16:07.645 "data_offset": 2048, 00:16:07.645 "data_size": 63488 00:16:07.645 }, 00:16:07.645 { 00:16:07.645 "name": "BaseBdev2", 00:16:07.645 "uuid": "f5a8a5ad-cfb2-5a76-96c0-37e036052318", 00:16:07.645 "is_configured": true, 00:16:07.645 "data_offset": 2048, 00:16:07.645 "data_size": 63488 00:16:07.645 }, 00:16:07.645 { 00:16:07.645 "name": "BaseBdev3", 00:16:07.645 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:07.645 "is_configured": true, 00:16:07.645 "data_offset": 2048, 00:16:07.645 "data_size": 63488 00:16:07.645 }, 00:16:07.645 { 00:16:07.645 "name": "BaseBdev4", 00:16:07.645 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:07.645 "is_configured": true, 00:16:07.645 "data_offset": 2048, 00:16:07.645 "data_size": 63488 00:16:07.645 } 00:16:07.645 ] 00:16:07.645 }' 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.645 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.214 [2024-10-11 09:49:52.598300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:08.214 [2024-10-11 09:49:52.677784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.214 09:49:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.214 "name": "raid_bdev1", 00:16:08.214 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:08.214 "strip_size_kb": 0, 00:16:08.214 "state": "online", 00:16:08.214 "raid_level": "raid1", 00:16:08.214 "superblock": true, 00:16:08.214 "num_base_bdevs": 4, 00:16:08.214 "num_base_bdevs_discovered": 3, 00:16:08.214 "num_base_bdevs_operational": 3, 00:16:08.214 "base_bdevs_list": [ 00:16:08.214 { 00:16:08.214 "name": null, 00:16:08.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.214 "is_configured": false, 00:16:08.214 "data_offset": 0, 00:16:08.214 "data_size": 
63488 00:16:08.214 }, 00:16:08.214 { 00:16:08.214 "name": "BaseBdev2", 00:16:08.214 "uuid": "f5a8a5ad-cfb2-5a76-96c0-37e036052318", 00:16:08.214 "is_configured": true, 00:16:08.214 "data_offset": 2048, 00:16:08.214 "data_size": 63488 00:16:08.214 }, 00:16:08.214 { 00:16:08.214 "name": "BaseBdev3", 00:16:08.214 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:08.214 "is_configured": true, 00:16:08.214 "data_offset": 2048, 00:16:08.214 "data_size": 63488 00:16:08.214 }, 00:16:08.214 { 00:16:08.214 "name": "BaseBdev4", 00:16:08.214 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:08.214 "is_configured": true, 00:16:08.214 "data_offset": 2048, 00:16:08.214 "data_size": 63488 00:16:08.214 } 00:16:08.214 ] 00:16:08.214 }' 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.214 09:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.214 [2024-10-11 09:49:52.795616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:08.214 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:08.214 Zero copy mechanism will not be used. 00:16:08.214 Running I/O for 60 seconds... 
00:16:08.783 09:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.783 09:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.783 09:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.783 [2024-10-11 09:49:53.135443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.783 09:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.783 09:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:08.783 [2024-10-11 09:49:53.175205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:08.783 [2024-10-11 09:49:53.177403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.783 [2024-10-11 09:49:53.306562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:09.042 [2024-10-11 09:49:53.442363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:09.302 199.00 IOPS, 597.00 MiB/s [2024-10-11T09:49:53.934Z] [2024-10-11 09:49:53.810136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:09.561 [2024-10-11 09:49:53.947029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:09.561 [2024-10-11 09:49:53.947967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.561 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.821 "name": "raid_bdev1", 00:16:09.821 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:09.821 "strip_size_kb": 0, 00:16:09.821 "state": "online", 00:16:09.821 "raid_level": "raid1", 00:16:09.821 "superblock": true, 00:16:09.821 "num_base_bdevs": 4, 00:16:09.821 "num_base_bdevs_discovered": 4, 00:16:09.821 "num_base_bdevs_operational": 4, 00:16:09.821 "process": { 00:16:09.821 "type": "rebuild", 00:16:09.821 "target": "spare", 00:16:09.821 "progress": { 00:16:09.821 "blocks": 12288, 00:16:09.821 "percent": 19 00:16:09.821 } 00:16:09.821 }, 00:16:09.821 "base_bdevs_list": [ 00:16:09.821 { 00:16:09.821 "name": "spare", 00:16:09.821 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:09.821 "is_configured": true, 00:16:09.821 "data_offset": 2048, 00:16:09.821 "data_size": 63488 00:16:09.821 }, 00:16:09.821 { 00:16:09.821 "name": "BaseBdev2", 00:16:09.821 "uuid": "f5a8a5ad-cfb2-5a76-96c0-37e036052318", 00:16:09.821 
"is_configured": true, 00:16:09.821 "data_offset": 2048, 00:16:09.821 "data_size": 63488 00:16:09.821 }, 00:16:09.821 { 00:16:09.821 "name": "BaseBdev3", 00:16:09.821 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:09.821 "is_configured": true, 00:16:09.821 "data_offset": 2048, 00:16:09.821 "data_size": 63488 00:16:09.821 }, 00:16:09.821 { 00:16:09.821 "name": "BaseBdev4", 00:16:09.821 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:09.821 "is_configured": true, 00:16:09.821 "data_offset": 2048, 00:16:09.821 "data_size": 63488 00:16:09.821 } 00:16:09.821 ] 00:16:09.821 }' 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.821 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.821 [2024-10-11 09:49:54.324838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.821 [2024-10-11 09:49:54.406994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:10.080 [2024-10-11 09:49:54.509961] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.080 [2024-10-11 09:49:54.521116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.080 [2024-10-11 09:49:54.521171] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.080 [2024-10-11 09:49:54.521186] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.080 [2024-10-11 09:49:54.555965] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.080 09:49:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.080 "name": "raid_bdev1", 00:16:10.080 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:10.080 "strip_size_kb": 0, 00:16:10.080 "state": "online", 00:16:10.080 "raid_level": "raid1", 00:16:10.080 "superblock": true, 00:16:10.080 "num_base_bdevs": 4, 00:16:10.080 "num_base_bdevs_discovered": 3, 00:16:10.080 "num_base_bdevs_operational": 3, 00:16:10.080 "base_bdevs_list": [ 00:16:10.080 { 00:16:10.080 "name": null, 00:16:10.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.080 "is_configured": false, 00:16:10.080 "data_offset": 0, 00:16:10.080 "data_size": 63488 00:16:10.080 }, 00:16:10.080 { 00:16:10.080 "name": "BaseBdev2", 00:16:10.080 "uuid": "f5a8a5ad-cfb2-5a76-96c0-37e036052318", 00:16:10.080 "is_configured": true, 00:16:10.080 "data_offset": 2048, 00:16:10.080 "data_size": 63488 00:16:10.080 }, 00:16:10.080 { 00:16:10.080 "name": "BaseBdev3", 00:16:10.080 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:10.080 "is_configured": true, 00:16:10.080 "data_offset": 2048, 00:16:10.080 "data_size": 63488 00:16:10.080 }, 00:16:10.080 { 00:16:10.080 "name": "BaseBdev4", 00:16:10.080 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:10.080 "is_configured": true, 00:16:10.080 "data_offset": 2048, 00:16:10.080 "data_size": 63488 00:16:10.080 } 00:16:10.080 ] 00:16:10.080 }' 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.080 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.339 157.50 IOPS, 472.50 MiB/s [2024-10-11T09:49:54.971Z] 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:16:10.339 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.339 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.339 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.339 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.597 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.597 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.597 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.597 09:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.597 "name": "raid_bdev1", 00:16:10.597 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:10.597 "strip_size_kb": 0, 00:16:10.597 "state": "online", 00:16:10.597 "raid_level": "raid1", 00:16:10.597 "superblock": true, 00:16:10.597 "num_base_bdevs": 4, 00:16:10.597 "num_base_bdevs_discovered": 3, 00:16:10.597 "num_base_bdevs_operational": 3, 00:16:10.597 "base_bdevs_list": [ 00:16:10.597 { 00:16:10.597 "name": null, 00:16:10.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.597 "is_configured": false, 00:16:10.597 "data_offset": 0, 00:16:10.597 "data_size": 63488 00:16:10.597 }, 00:16:10.597 { 00:16:10.597 "name": "BaseBdev2", 00:16:10.597 "uuid": "f5a8a5ad-cfb2-5a76-96c0-37e036052318", 00:16:10.597 "is_configured": true, 00:16:10.597 "data_offset": 2048, 00:16:10.597 "data_size": 63488 00:16:10.597 }, 00:16:10.597 { 00:16:10.597 "name": "BaseBdev3", 
00:16:10.597 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:10.597 "is_configured": true, 00:16:10.597 "data_offset": 2048, 00:16:10.597 "data_size": 63488 00:16:10.597 }, 00:16:10.597 { 00:16:10.597 "name": "BaseBdev4", 00:16:10.597 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:10.597 "is_configured": true, 00:16:10.597 "data_offset": 2048, 00:16:10.597 "data_size": 63488 00:16:10.597 } 00:16:10.597 ] 00:16:10.597 }' 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.597 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.598 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.598 [2024-10-11 09:49:55.112178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.598 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.598 09:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:10.598 [2024-10-11 09:49:55.193164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:10.598 [2024-10-11 09:49:55.195419] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.857 [2024-10-11 09:49:55.314245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:10.857 
[2024-10-11 09:49:55.315882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:11.115 [2024-10-11 09:49:55.520661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:11.115 [2024-10-11 09:49:55.521682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:11.374 148.00 IOPS, 444.00 MiB/s [2024-10-11T09:49:56.006Z] [2024-10-11 09:49:55.928950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:11.374 [2024-10-11 09:49:55.929702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:11.633 [2024-10-11 09:49:56.054209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.633 "name": "raid_bdev1", 00:16:11.633 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:11.633 "strip_size_kb": 0, 00:16:11.633 "state": "online", 00:16:11.633 "raid_level": "raid1", 00:16:11.633 "superblock": true, 00:16:11.633 "num_base_bdevs": 4, 00:16:11.633 "num_base_bdevs_discovered": 4, 00:16:11.633 "num_base_bdevs_operational": 4, 00:16:11.633 "process": { 00:16:11.633 "type": "rebuild", 00:16:11.633 "target": "spare", 00:16:11.633 "progress": { 00:16:11.633 "blocks": 10240, 00:16:11.633 "percent": 16 00:16:11.633 } 00:16:11.633 }, 00:16:11.633 "base_bdevs_list": [ 00:16:11.633 { 00:16:11.633 "name": "spare", 00:16:11.633 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:11.633 "is_configured": true, 00:16:11.633 "data_offset": 2048, 00:16:11.633 "data_size": 63488 00:16:11.633 }, 00:16:11.633 { 00:16:11.633 "name": "BaseBdev2", 00:16:11.633 "uuid": "f5a8a5ad-cfb2-5a76-96c0-37e036052318", 00:16:11.633 "is_configured": true, 00:16:11.633 "data_offset": 2048, 00:16:11.633 "data_size": 63488 00:16:11.633 }, 00:16:11.633 { 00:16:11.633 "name": "BaseBdev3", 00:16:11.633 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:11.633 "is_configured": true, 00:16:11.633 "data_offset": 2048, 00:16:11.633 "data_size": 63488 00:16:11.633 }, 00:16:11.633 { 00:16:11.633 "name": "BaseBdev4", 00:16:11.633 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:11.633 "is_configured": true, 00:16:11.633 "data_offset": 2048, 00:16:11.633 "data_size": 63488 00:16:11.633 } 00:16:11.633 ] 00:16:11.633 }' 00:16:11.633 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:11.893 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.893 [2024-10-11 09:49:56.323057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:11.893 [2024-10-11 09:49:56.376696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:11.893 [2024-10-11 09:49:56.499850] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:11.893 [2024-10-11 09:49:56.499992] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:11.893 [2024-10-11 09:49:56.503277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:11.893 09:49:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.893 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.153 "name": "raid_bdev1", 00:16:12.153 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:12.153 "strip_size_kb": 0, 00:16:12.153 "state": "online", 00:16:12.153 "raid_level": "raid1", 00:16:12.153 "superblock": true, 00:16:12.153 "num_base_bdevs": 4, 00:16:12.153 "num_base_bdevs_discovered": 3, 00:16:12.153 "num_base_bdevs_operational": 3, 00:16:12.153 "process": { 00:16:12.153 "type": "rebuild", 00:16:12.153 "target": "spare", 
00:16:12.153 "progress": { 00:16:12.153 "blocks": 14336, 00:16:12.153 "percent": 22 00:16:12.153 } 00:16:12.153 }, 00:16:12.153 "base_bdevs_list": [ 00:16:12.153 { 00:16:12.153 "name": "spare", 00:16:12.153 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:12.153 "is_configured": true, 00:16:12.153 "data_offset": 2048, 00:16:12.153 "data_size": 63488 00:16:12.153 }, 00:16:12.153 { 00:16:12.153 "name": null, 00:16:12.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.153 "is_configured": false, 00:16:12.153 "data_offset": 0, 00:16:12.153 "data_size": 63488 00:16:12.153 }, 00:16:12.153 { 00:16:12.153 "name": "BaseBdev3", 00:16:12.153 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:12.153 "is_configured": true, 00:16:12.153 "data_offset": 2048, 00:16:12.153 "data_size": 63488 00:16:12.153 }, 00:16:12.153 { 00:16:12.153 "name": "BaseBdev4", 00:16:12.153 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:12.153 "is_configured": true, 00:16:12.153 "data_offset": 2048, 00:16:12.153 "data_size": 63488 00:16:12.153 } 00:16:12.153 ] 00:16:12.153 }' 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=512 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.153 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.154 "name": "raid_bdev1", 00:16:12.154 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:12.154 "strip_size_kb": 0, 00:16:12.154 "state": "online", 00:16:12.154 "raid_level": "raid1", 00:16:12.154 "superblock": true, 00:16:12.154 "num_base_bdevs": 4, 00:16:12.154 "num_base_bdevs_discovered": 3, 00:16:12.154 "num_base_bdevs_operational": 3, 00:16:12.154 "process": { 00:16:12.154 "type": "rebuild", 00:16:12.154 "target": "spare", 00:16:12.154 "progress": { 00:16:12.154 "blocks": 16384, 00:16:12.154 "percent": 25 00:16:12.154 } 00:16:12.154 }, 00:16:12.154 "base_bdevs_list": [ 00:16:12.154 { 00:16:12.154 "name": "spare", 00:16:12.154 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:12.154 "is_configured": true, 00:16:12.154 "data_offset": 2048, 00:16:12.154 "data_size": 63488 00:16:12.154 }, 00:16:12.154 { 00:16:12.154 "name": null, 00:16:12.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.154 "is_configured": false, 00:16:12.154 
"data_offset": 0, 00:16:12.154 "data_size": 63488 00:16:12.154 }, 00:16:12.154 { 00:16:12.154 "name": "BaseBdev3", 00:16:12.154 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:12.154 "is_configured": true, 00:16:12.154 "data_offset": 2048, 00:16:12.154 "data_size": 63488 00:16:12.154 }, 00:16:12.154 { 00:16:12.154 "name": "BaseBdev4", 00:16:12.154 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:12.154 "is_configured": true, 00:16:12.154 "data_offset": 2048, 00:16:12.154 "data_size": 63488 00:16:12.154 } 00:16:12.154 ] 00:16:12.154 }' 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.154 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.413 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.413 09:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.413 126.75 IOPS, 380.25 MiB/s [2024-10-11T09:49:57.045Z] [2024-10-11 09:49:56.867395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:12.982 [2024-10-11 09:49:57.446460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.241 09:49:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.241 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.241 111.20 IOPS, 333.60 MiB/s [2024-10-11T09:49:57.873Z] 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.242 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.242 "name": "raid_bdev1", 00:16:13.242 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:13.242 "strip_size_kb": 0, 00:16:13.242 "state": "online", 00:16:13.242 "raid_level": "raid1", 00:16:13.242 "superblock": true, 00:16:13.242 "num_base_bdevs": 4, 00:16:13.242 "num_base_bdevs_discovered": 3, 00:16:13.242 "num_base_bdevs_operational": 3, 00:16:13.242 "process": { 00:16:13.242 "type": "rebuild", 00:16:13.242 "target": "spare", 00:16:13.242 "progress": { 00:16:13.242 "blocks": 32768, 00:16:13.242 "percent": 51 00:16:13.242 } 00:16:13.242 }, 00:16:13.242 "base_bdevs_list": [ 00:16:13.242 { 00:16:13.242 "name": "spare", 00:16:13.242 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:13.242 "is_configured": true, 00:16:13.242 "data_offset": 2048, 00:16:13.242 "data_size": 63488 00:16:13.242 }, 00:16:13.242 { 00:16:13.242 "name": null, 00:16:13.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.242 "is_configured": false, 00:16:13.242 "data_offset": 0, 00:16:13.242 "data_size": 63488 00:16:13.242 }, 00:16:13.242 { 00:16:13.242 "name": "BaseBdev3", 
00:16:13.242 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:13.242 "is_configured": true, 00:16:13.242 "data_offset": 2048, 00:16:13.242 "data_size": 63488 00:16:13.242 }, 00:16:13.242 { 00:16:13.242 "name": "BaseBdev4", 00:16:13.242 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:13.242 "is_configured": true, 00:16:13.242 "data_offset": 2048, 00:16:13.242 "data_size": 63488 00:16:13.242 } 00:16:13.242 ] 00:16:13.242 }' 00:16:13.242 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.502 [2024-10-11 09:49:57.875232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:13.502 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.502 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.502 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.502 09:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.762 [2024-10-11 09:49:58.328417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:13.762 [2024-10-11 09:49:58.328719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:14.331 [2024-10-11 09:49:58.711884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:14.331 100.17 IOPS, 300.50 MiB/s [2024-10-11T09:49:58.963Z] 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.331 09:49:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.331 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.591 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.591 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.591 "name": "raid_bdev1", 00:16:14.591 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:14.591 "strip_size_kb": 0, 00:16:14.591 "state": "online", 00:16:14.591 "raid_level": "raid1", 00:16:14.591 "superblock": true, 00:16:14.591 "num_base_bdevs": 4, 00:16:14.591 "num_base_bdevs_discovered": 3, 00:16:14.591 "num_base_bdevs_operational": 3, 00:16:14.591 "process": { 00:16:14.591 "type": "rebuild", 00:16:14.591 "target": "spare", 00:16:14.591 "progress": { 00:16:14.591 "blocks": 49152, 00:16:14.591 "percent": 77 00:16:14.591 } 00:16:14.591 }, 00:16:14.591 "base_bdevs_list": [ 00:16:14.591 { 00:16:14.591 "name": "spare", 00:16:14.591 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:14.591 "is_configured": true, 00:16:14.591 "data_offset": 2048, 00:16:14.591 "data_size": 63488 00:16:14.591 }, 00:16:14.591 { 00:16:14.591 "name": null, 00:16:14.591 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:14.591 "is_configured": false, 00:16:14.591 "data_offset": 0, 00:16:14.591 "data_size": 63488 00:16:14.591 }, 00:16:14.591 { 00:16:14.591 "name": "BaseBdev3", 00:16:14.591 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:14.591 "is_configured": true, 00:16:14.591 "data_offset": 2048, 00:16:14.591 "data_size": 63488 00:16:14.591 }, 00:16:14.591 { 00:16:14.591 "name": "BaseBdev4", 00:16:14.591 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:14.591 "is_configured": true, 00:16:14.591 "data_offset": 2048, 00:16:14.591 "data_size": 63488 00:16:14.591 } 00:16:14.591 ] 00:16:14.591 }' 00:16:14.591 09:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.591 09:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.591 09:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.591 09:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.591 09:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.160 [2024-10-11 09:49:59.485295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:15.160 [2024-10-11 09:49:59.709814] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:15.420 90.86 IOPS, 272.57 MiB/s [2024-10-11T09:50:00.052Z] [2024-10-11 09:49:59.809645] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:15.420 [2024-10-11 09:49:59.818391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.680 "name": "raid_bdev1", 00:16:15.680 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:15.680 "strip_size_kb": 0, 00:16:15.680 "state": "online", 00:16:15.680 "raid_level": "raid1", 00:16:15.680 "superblock": true, 00:16:15.680 "num_base_bdevs": 4, 00:16:15.680 "num_base_bdevs_discovered": 3, 00:16:15.680 "num_base_bdevs_operational": 3, 00:16:15.680 "base_bdevs_list": [ 00:16:15.680 { 00:16:15.680 "name": "spare", 00:16:15.680 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:15.680 "is_configured": true, 00:16:15.680 "data_offset": 2048, 00:16:15.680 "data_size": 63488 00:16:15.680 }, 00:16:15.680 { 00:16:15.680 "name": null, 00:16:15.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.680 "is_configured": false, 00:16:15.680 "data_offset": 0, 00:16:15.680 "data_size": 63488 00:16:15.680 }, 
00:16:15.680 { 00:16:15.680 "name": "BaseBdev3", 00:16:15.680 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:15.680 "is_configured": true, 00:16:15.680 "data_offset": 2048, 00:16:15.680 "data_size": 63488 00:16:15.680 }, 00:16:15.680 { 00:16:15.680 "name": "BaseBdev4", 00:16:15.680 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:15.680 "is_configured": true, 00:16:15.680 "data_offset": 2048, 00:16:15.680 "data_size": 63488 00:16:15.680 } 00:16:15.680 ] 00:16:15.680 }' 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.680 09:50:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.680 "name": "raid_bdev1", 00:16:15.680 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:15.680 "strip_size_kb": 0, 00:16:15.680 "state": "online", 00:16:15.680 "raid_level": "raid1", 00:16:15.680 "superblock": true, 00:16:15.680 "num_base_bdevs": 4, 00:16:15.680 "num_base_bdevs_discovered": 3, 00:16:15.680 "num_base_bdevs_operational": 3, 00:16:15.680 "base_bdevs_list": [ 00:16:15.680 { 00:16:15.680 "name": "spare", 00:16:15.680 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:15.680 "is_configured": true, 00:16:15.680 "data_offset": 2048, 00:16:15.680 "data_size": 63488 00:16:15.680 }, 00:16:15.680 { 00:16:15.680 "name": null, 00:16:15.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.680 "is_configured": false, 00:16:15.680 "data_offset": 0, 00:16:15.680 "data_size": 63488 00:16:15.680 }, 00:16:15.680 { 00:16:15.680 "name": "BaseBdev3", 00:16:15.680 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:15.680 "is_configured": true, 00:16:15.680 "data_offset": 2048, 00:16:15.680 "data_size": 63488 00:16:15.680 }, 00:16:15.680 { 00:16:15.680 "name": "BaseBdev4", 00:16:15.680 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:15.680 "is_configured": true, 00:16:15.680 "data_offset": 2048, 00:16:15.680 "data_size": 63488 00:16:15.680 } 00:16:15.680 ] 00:16:15.680 }' 00:16:15.680 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.939 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.939 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.939 09:50:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.939 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:15.939 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.940 "name": "raid_bdev1", 00:16:15.940 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:15.940 
"strip_size_kb": 0, 00:16:15.940 "state": "online", 00:16:15.940 "raid_level": "raid1", 00:16:15.940 "superblock": true, 00:16:15.940 "num_base_bdevs": 4, 00:16:15.940 "num_base_bdevs_discovered": 3, 00:16:15.940 "num_base_bdevs_operational": 3, 00:16:15.940 "base_bdevs_list": [ 00:16:15.940 { 00:16:15.940 "name": "spare", 00:16:15.940 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:15.940 "is_configured": true, 00:16:15.940 "data_offset": 2048, 00:16:15.940 "data_size": 63488 00:16:15.940 }, 00:16:15.940 { 00:16:15.940 "name": null, 00:16:15.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.940 "is_configured": false, 00:16:15.940 "data_offset": 0, 00:16:15.940 "data_size": 63488 00:16:15.940 }, 00:16:15.940 { 00:16:15.940 "name": "BaseBdev3", 00:16:15.940 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:15.940 "is_configured": true, 00:16:15.940 "data_offset": 2048, 00:16:15.940 "data_size": 63488 00:16:15.940 }, 00:16:15.940 { 00:16:15.940 "name": "BaseBdev4", 00:16:15.940 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:15.940 "is_configured": true, 00:16:15.940 "data_offset": 2048, 00:16:15.940 "data_size": 63488 00:16:15.940 } 00:16:15.940 ] 00:16:15.940 }' 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.940 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.199 83.00 IOPS, 249.00 MiB/s [2024-10-11T09:50:00.831Z] 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.199 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.199 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.199 [2024-10-11 09:50:00.828750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.199 [2024-10-11 09:50:00.828826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:16:16.459 00:16:16.459 Latency(us) 00:16:16.459 [2024-10-11T09:50:01.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.459 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:16.459 raid_bdev1 : 8.13 82.12 246.37 0.00 0.00 17199.77 316.59 121799.66 00:16:16.459 [2024-10-11T09:50:01.091Z] =================================================================================================================== 00:16:16.459 [2024-10-11T09:50:01.091Z] Total : 82.12 246.37 0.00 0.00 17199.77 316.59 121799.66 00:16:16.459 [2024-10-11 09:50:00.936340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.459 { 00:16:16.459 "results": [ 00:16:16.459 { 00:16:16.459 "job": "raid_bdev1", 00:16:16.459 "core_mask": "0x1", 00:16:16.459 "workload": "randrw", 00:16:16.459 "percentage": 50, 00:16:16.459 "status": "finished", 00:16:16.459 "queue_depth": 2, 00:16:16.459 "io_size": 3145728, 00:16:16.459 "runtime": 8.134071, 00:16:16.459 "iops": 82.12369919072503, 00:16:16.459 "mibps": 246.3710975721751, 00:16:16.459 "io_failed": 0, 00:16:16.459 "io_timeout": 0, 00:16:16.459 "avg_latency_us": 17199.772653819, 00:16:16.459 "min_latency_us": 316.5903930131004, 00:16:16.459 "max_latency_us": 121799.6576419214 00:16:16.459 } 00:16:16.459 ], 00:16:16.459 "core_count": 1 00:16:16.459 } 00:16:16.459 [2024-10-11 09:50:00.936430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.459 [2024-10-11 09:50:00.936531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.459 [2024-10-11 09:50:00.936544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.459 09:50:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.459 09:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:16.719 /dev/nbd0 
00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.719 1+0 records in 00:16:16.719 1+0 records out 00:16:16.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549058 s, 7.5 MB/s 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # return 0 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:16.719 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.720 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:16.980 /dev/nbd1 00:16:16.980 09:50:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.980 1+0 records in 00:16:16.980 1+0 records out 00:16:16.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351645 s, 11.6 MB/s 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.980 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:17.239 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:17.239 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.239 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:17.239 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:17.239 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:17.239 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.239 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.501 
09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.501 09:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:17.761 /dev/nbd1 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.761 1+0 records in 00:16:17.761 1+0 records out 00:16:17.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412857 s, 9.9 MB/s 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.761 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:18.021 09:50:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.021 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.280 
[2024-10-11 09:50:02.718355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:18.280 [2024-10-11 09:50:02.718470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.280 [2024-10-11 09:50:02.718508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:18.280 [2024-10-11 09:50:02.718539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.280 [2024-10-11 09:50:02.720884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.280 [2024-10-11 09:50:02.720963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:18.280 spare 00:16:18.280 [2024-10-11 09:50:02.721083] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:18.280 [2024-10-11 09:50:02.721153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.280 [2024-10-11 09:50:02.721282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.280 [2024-10-11 09:50:02.721379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.280 [2024-10-11 09:50:02.821286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:18.280 [2024-10-11 09:50:02.821343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:18.280 [2024-10-11 09:50:02.821688] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:18.280 [2024-10-11 09:50:02.821972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:18.280 [2024-10-11 09:50:02.822003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:18.280 [2024-10-11 09:50:02.822267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.280 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.281 "name": "raid_bdev1", 00:16:18.281 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:18.281 "strip_size_kb": 0, 00:16:18.281 "state": "online", 00:16:18.281 "raid_level": "raid1", 00:16:18.281 "superblock": true, 00:16:18.281 "num_base_bdevs": 4, 00:16:18.281 "num_base_bdevs_discovered": 3, 00:16:18.281 "num_base_bdevs_operational": 3, 00:16:18.281 "base_bdevs_list": [ 00:16:18.281 { 00:16:18.281 "name": "spare", 00:16:18.281 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:18.281 "is_configured": true, 00:16:18.281 "data_offset": 2048, 00:16:18.281 "data_size": 63488 00:16:18.281 }, 00:16:18.281 { 00:16:18.281 "name": null, 00:16:18.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.281 "is_configured": false, 00:16:18.281 "data_offset": 2048, 00:16:18.281 "data_size": 63488 00:16:18.281 }, 00:16:18.281 { 00:16:18.281 "name": "BaseBdev3", 00:16:18.281 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:18.281 "is_configured": true, 00:16:18.281 "data_offset": 2048, 00:16:18.281 "data_size": 63488 00:16:18.281 }, 00:16:18.281 { 00:16:18.281 "name": "BaseBdev4", 00:16:18.281 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:18.281 "is_configured": true, 00:16:18.281 "data_offset": 2048, 00:16:18.281 "data_size": 63488 00:16:18.281 } 00:16:18.281 ] 00:16:18.281 }' 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.281 09:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.849 "name": "raid_bdev1", 00:16:18.849 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:18.849 "strip_size_kb": 0, 00:16:18.849 "state": "online", 00:16:18.849 "raid_level": "raid1", 00:16:18.849 "superblock": true, 00:16:18.849 "num_base_bdevs": 4, 00:16:18.849 "num_base_bdevs_discovered": 3, 00:16:18.849 "num_base_bdevs_operational": 3, 00:16:18.849 "base_bdevs_list": [ 00:16:18.849 { 00:16:18.849 "name": "spare", 00:16:18.849 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:18.849 "is_configured": true, 00:16:18.849 "data_offset": 2048, 00:16:18.849 "data_size": 63488 00:16:18.849 }, 00:16:18.849 { 00:16:18.849 "name": null, 00:16:18.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.849 "is_configured": false, 00:16:18.849 "data_offset": 2048, 00:16:18.849 "data_size": 63488 00:16:18.849 }, 00:16:18.849 { 00:16:18.849 "name": 
"BaseBdev3", 00:16:18.849 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:18.849 "is_configured": true, 00:16:18.849 "data_offset": 2048, 00:16:18.849 "data_size": 63488 00:16:18.849 }, 00:16:18.849 { 00:16:18.849 "name": "BaseBdev4", 00:16:18.849 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:18.849 "is_configured": true, 00:16:18.849 "data_offset": 2048, 00:16:18.849 "data_size": 63488 00:16:18.849 } 00:16:18.849 ] 00:16:18.849 }' 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.849 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:18.850 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.850 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.850 [2024-10-11 09:50:03.477255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.108 "name": "raid_bdev1", 00:16:19.108 "uuid": 
"5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:19.108 "strip_size_kb": 0, 00:16:19.108 "state": "online", 00:16:19.108 "raid_level": "raid1", 00:16:19.108 "superblock": true, 00:16:19.108 "num_base_bdevs": 4, 00:16:19.108 "num_base_bdevs_discovered": 2, 00:16:19.108 "num_base_bdevs_operational": 2, 00:16:19.108 "base_bdevs_list": [ 00:16:19.109 { 00:16:19.109 "name": null, 00:16:19.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.109 "is_configured": false, 00:16:19.109 "data_offset": 0, 00:16:19.109 "data_size": 63488 00:16:19.109 }, 00:16:19.109 { 00:16:19.109 "name": null, 00:16:19.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.109 "is_configured": false, 00:16:19.109 "data_offset": 2048, 00:16:19.109 "data_size": 63488 00:16:19.109 }, 00:16:19.109 { 00:16:19.109 "name": "BaseBdev3", 00:16:19.109 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:19.109 "is_configured": true, 00:16:19.109 "data_offset": 2048, 00:16:19.109 "data_size": 63488 00:16:19.109 }, 00:16:19.109 { 00:16:19.109 "name": "BaseBdev4", 00:16:19.109 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:19.109 "is_configured": true, 00:16:19.109 "data_offset": 2048, 00:16:19.109 "data_size": 63488 00:16:19.109 } 00:16:19.109 ] 00:16:19.109 }' 00:16:19.109 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.109 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.373 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:19.373 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.373 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.373 [2024-10-11 09:50:03.868657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.373 [2024-10-11 09:50:03.868938] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:19.373 [2024-10-11 09:50:03.869004] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:19.373 [2024-10-11 09:50:03.869048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.373 [2024-10-11 09:50:03.884926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:19.373 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.373 09:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:19.373 [2024-10-11 09:50:03.887010] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.311 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:20.569 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.569 "name": "raid_bdev1", 00:16:20.569 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:20.569 "strip_size_kb": 0, 00:16:20.569 "state": "online", 00:16:20.569 "raid_level": "raid1", 00:16:20.569 "superblock": true, 00:16:20.569 "num_base_bdevs": 4, 00:16:20.569 "num_base_bdevs_discovered": 3, 00:16:20.569 "num_base_bdevs_operational": 3, 00:16:20.569 "process": { 00:16:20.569 "type": "rebuild", 00:16:20.569 "target": "spare", 00:16:20.569 "progress": { 00:16:20.569 "blocks": 20480, 00:16:20.569 "percent": 32 00:16:20.569 } 00:16:20.569 }, 00:16:20.569 "base_bdevs_list": [ 00:16:20.569 { 00:16:20.569 "name": "spare", 00:16:20.569 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:20.569 "is_configured": true, 00:16:20.569 "data_offset": 2048, 00:16:20.570 "data_size": 63488 00:16:20.570 }, 00:16:20.570 { 00:16:20.570 "name": null, 00:16:20.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.570 "is_configured": false, 00:16:20.570 "data_offset": 2048, 00:16:20.570 "data_size": 63488 00:16:20.570 }, 00:16:20.570 { 00:16:20.570 "name": "BaseBdev3", 00:16:20.570 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:20.570 "is_configured": true, 00:16:20.570 "data_offset": 2048, 00:16:20.570 "data_size": 63488 00:16:20.570 }, 00:16:20.570 { 00:16:20.570 "name": "BaseBdev4", 00:16:20.570 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:20.570 "is_configured": true, 00:16:20.570 "data_offset": 2048, 00:16:20.570 "data_size": 63488 00:16:20.570 } 00:16:20.570 ] 00:16:20.570 }' 00:16:20.570 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.570 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.570 09:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.570 [2024-10-11 09:50:05.043086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.570 [2024-10-11 09:50:05.092781] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:20.570 [2024-10-11 09:50:05.092945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.570 [2024-10-11 09:50:05.093000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.570 [2024-10-11 09:50:05.093030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.570 "name": "raid_bdev1", 00:16:20.570 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:20.570 "strip_size_kb": 0, 00:16:20.570 "state": "online", 00:16:20.570 "raid_level": "raid1", 00:16:20.570 "superblock": true, 00:16:20.570 "num_base_bdevs": 4, 00:16:20.570 "num_base_bdevs_discovered": 2, 00:16:20.570 "num_base_bdevs_operational": 2, 00:16:20.570 "base_bdevs_list": [ 00:16:20.570 { 00:16:20.570 "name": null, 00:16:20.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.570 "is_configured": false, 00:16:20.570 "data_offset": 0, 00:16:20.570 "data_size": 63488 00:16:20.570 }, 00:16:20.570 { 00:16:20.570 "name": null, 00:16:20.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.570 "is_configured": false, 00:16:20.570 "data_offset": 2048, 00:16:20.570 "data_size": 63488 00:16:20.570 }, 00:16:20.570 { 00:16:20.570 "name": "BaseBdev3", 00:16:20.570 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:20.570 "is_configured": true, 00:16:20.570 "data_offset": 2048, 
00:16:20.570 "data_size": 63488 00:16:20.570 }, 00:16:20.570 { 00:16:20.570 "name": "BaseBdev4", 00:16:20.570 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:20.570 "is_configured": true, 00:16:20.570 "data_offset": 2048, 00:16:20.570 "data_size": 63488 00:16:20.570 } 00:16:20.570 ] 00:16:20.570 }' 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.570 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.139 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.139 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.139 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.139 [2024-10-11 09:50:05.585535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.139 [2024-10-11 09:50:05.585678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.139 [2024-10-11 09:50:05.585726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:21.139 [2024-10-11 09:50:05.585779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.139 [2024-10-11 09:50:05.586393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.139 [2024-10-11 09:50:05.586470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.139 [2024-10-11 09:50:05.586610] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:21.139 [2024-10-11 09:50:05.586636] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:21.139 [2024-10-11 09:50:05.586649] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding 
bdev spare to raid bdev raid_bdev1. 00:16:21.139 [2024-10-11 09:50:05.586675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.139 spare 00:16:21.139 [2024-10-11 09:50:05.603097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:21.139 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.139 [2024-10-11 09:50:05.604996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.139 09:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.086 "name": "raid_bdev1", 00:16:22.086 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:22.086 
"strip_size_kb": 0, 00:16:22.086 "state": "online", 00:16:22.086 "raid_level": "raid1", 00:16:22.086 "superblock": true, 00:16:22.086 "num_base_bdevs": 4, 00:16:22.086 "num_base_bdevs_discovered": 3, 00:16:22.086 "num_base_bdevs_operational": 3, 00:16:22.086 "process": { 00:16:22.086 "type": "rebuild", 00:16:22.086 "target": "spare", 00:16:22.086 "progress": { 00:16:22.086 "blocks": 20480, 00:16:22.086 "percent": 32 00:16:22.086 } 00:16:22.086 }, 00:16:22.086 "base_bdevs_list": [ 00:16:22.086 { 00:16:22.086 "name": "spare", 00:16:22.086 "uuid": "275b8dae-8e37-569a-9ca6-37524165db2a", 00:16:22.086 "is_configured": true, 00:16:22.086 "data_offset": 2048, 00:16:22.086 "data_size": 63488 00:16:22.086 }, 00:16:22.086 { 00:16:22.086 "name": null, 00:16:22.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.086 "is_configured": false, 00:16:22.086 "data_offset": 2048, 00:16:22.086 "data_size": 63488 00:16:22.086 }, 00:16:22.086 { 00:16:22.086 "name": "BaseBdev3", 00:16:22.086 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:22.086 "is_configured": true, 00:16:22.086 "data_offset": 2048, 00:16:22.086 "data_size": 63488 00:16:22.086 }, 00:16:22.086 { 00:16:22.086 "name": "BaseBdev4", 00:16:22.086 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:22.086 "is_configured": true, 00:16:22.086 "data_offset": 2048, 00:16:22.086 "data_size": 63488 00:16:22.086 } 00:16:22.086 ] 00:16:22.086 }' 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.086 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete 
spare 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.344 [2024-10-11 09:50:06.772685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.344 [2024-10-11 09:50:06.810967] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.344 [2024-10-11 09:50:06.811129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.344 [2024-10-11 09:50:06.811154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.344 [2024-10-11 09:50:06.811164] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.344 
09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.344 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.344 "name": "raid_bdev1", 00:16:22.344 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:22.345 "strip_size_kb": 0, 00:16:22.345 "state": "online", 00:16:22.345 "raid_level": "raid1", 00:16:22.345 "superblock": true, 00:16:22.345 "num_base_bdevs": 4, 00:16:22.345 "num_base_bdevs_discovered": 2, 00:16:22.345 "num_base_bdevs_operational": 2, 00:16:22.345 "base_bdevs_list": [ 00:16:22.345 { 00:16:22.345 "name": null, 00:16:22.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.345 "is_configured": false, 00:16:22.345 "data_offset": 0, 00:16:22.345 "data_size": 63488 00:16:22.345 }, 00:16:22.345 { 00:16:22.345 "name": null, 00:16:22.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.345 "is_configured": false, 00:16:22.345 "data_offset": 2048, 00:16:22.345 "data_size": 63488 00:16:22.345 }, 00:16:22.345 { 00:16:22.345 "name": "BaseBdev3", 00:16:22.345 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:22.345 "is_configured": true, 00:16:22.345 "data_offset": 2048, 00:16:22.345 "data_size": 63488 00:16:22.345 }, 00:16:22.345 { 00:16:22.345 "name": "BaseBdev4", 00:16:22.345 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:22.345 "is_configured": true, 00:16:22.345 "data_offset": 2048, 
00:16:22.345 "data_size": 63488 00:16:22.345 } 00:16:22.345 ] 00:16:22.345 }' 00:16:22.345 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.345 09:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.913 "name": "raid_bdev1", 00:16:22.913 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:22.913 "strip_size_kb": 0, 00:16:22.913 "state": "online", 00:16:22.913 "raid_level": "raid1", 00:16:22.913 "superblock": true, 00:16:22.913 "num_base_bdevs": 4, 00:16:22.913 "num_base_bdevs_discovered": 2, 00:16:22.913 "num_base_bdevs_operational": 2, 00:16:22.913 "base_bdevs_list": [ 00:16:22.913 { 00:16:22.913 "name": null, 00:16:22.913 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:22.913 "is_configured": false, 00:16:22.913 "data_offset": 0, 00:16:22.913 "data_size": 63488 00:16:22.913 }, 00:16:22.913 { 00:16:22.913 "name": null, 00:16:22.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.913 "is_configured": false, 00:16:22.913 "data_offset": 2048, 00:16:22.913 "data_size": 63488 00:16:22.913 }, 00:16:22.913 { 00:16:22.913 "name": "BaseBdev3", 00:16:22.913 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:22.913 "is_configured": true, 00:16:22.913 "data_offset": 2048, 00:16:22.913 "data_size": 63488 00:16:22.913 }, 00:16:22.913 { 00:16:22.913 "name": "BaseBdev4", 00:16:22.913 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:22.913 "is_configured": true, 00:16:22.913 "data_offset": 2048, 00:16:22.913 "data_size": 63488 00:16:22.913 } 00:16:22.913 ] 00:16:22.913 }' 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.913 [2024-10-11 09:50:07.486648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:22.913 [2024-10-11 09:50:07.486775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.913 [2024-10-11 09:50:07.486822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:22.913 [2024-10-11 09:50:07.486901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.913 [2024-10-11 09:50:07.487436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.913 [2024-10-11 09:50:07.487509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.913 [2024-10-11 09:50:07.487629] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:22.913 [2024-10-11 09:50:07.487670] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:22.913 [2024-10-11 09:50:07.487690] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:22.913 [2024-10-11 09:50:07.487701] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:22.913 BaseBdev1 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.913 09:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.291 "name": "raid_bdev1", 00:16:24.291 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:24.291 "strip_size_kb": 0, 00:16:24.291 "state": "online", 00:16:24.291 "raid_level": "raid1", 00:16:24.291 "superblock": true, 00:16:24.291 "num_base_bdevs": 4, 00:16:24.291 "num_base_bdevs_discovered": 2, 00:16:24.291 "num_base_bdevs_operational": 2, 00:16:24.291 "base_bdevs_list": [ 00:16:24.291 { 00:16:24.291 "name": null, 00:16:24.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.291 
"is_configured": false, 00:16:24.291 "data_offset": 0, 00:16:24.291 "data_size": 63488 00:16:24.291 }, 00:16:24.291 { 00:16:24.291 "name": null, 00:16:24.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.291 "is_configured": false, 00:16:24.291 "data_offset": 2048, 00:16:24.291 "data_size": 63488 00:16:24.291 }, 00:16:24.291 { 00:16:24.291 "name": "BaseBdev3", 00:16:24.291 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:24.291 "is_configured": true, 00:16:24.291 "data_offset": 2048, 00:16:24.291 "data_size": 63488 00:16:24.291 }, 00:16:24.291 { 00:16:24.291 "name": "BaseBdev4", 00:16:24.291 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:24.291 "is_configured": true, 00:16:24.291 "data_offset": 2048, 00:16:24.291 "data_size": 63488 00:16:24.291 } 00:16:24.291 ] 00:16:24.291 }' 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.291 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:24.551 09:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.551 "name": "raid_bdev1", 00:16:24.551 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:24.551 "strip_size_kb": 0, 00:16:24.551 "state": "online", 00:16:24.551 "raid_level": "raid1", 00:16:24.551 "superblock": true, 00:16:24.551 "num_base_bdevs": 4, 00:16:24.551 "num_base_bdevs_discovered": 2, 00:16:24.551 "num_base_bdevs_operational": 2, 00:16:24.551 "base_bdevs_list": [ 00:16:24.551 { 00:16:24.551 "name": null, 00:16:24.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.551 "is_configured": false, 00:16:24.551 "data_offset": 0, 00:16:24.551 "data_size": 63488 00:16:24.551 }, 00:16:24.551 { 00:16:24.551 "name": null, 00:16:24.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.551 "is_configured": false, 00:16:24.551 "data_offset": 2048, 00:16:24.551 "data_size": 63488 00:16:24.551 }, 00:16:24.551 { 00:16:24.551 "name": "BaseBdev3", 00:16:24.551 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:24.551 "is_configured": true, 00:16:24.551 "data_offset": 2048, 00:16:24.551 "data_size": 63488 00:16:24.551 }, 00:16:24.551 { 00:16:24.551 "name": "BaseBdev4", 00:16:24.551 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:24.551 "is_configured": true, 00:16:24.551 "data_offset": 2048, 00:16:24.551 "data_size": 63488 00:16:24.551 } 00:16:24.551 ] 00:16:24.551 }' 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.551 [2024-10-11 09:50:09.096253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.551 [2024-10-11 09:50:09.096504] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:24.551 [2024-10-11 09:50:09.096563] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:24.551 request: 00:16:24.551 { 00:16:24.551 "base_bdev": "BaseBdev1", 00:16:24.551 "raid_bdev": "raid_bdev1", 00:16:24.551 "method": "bdev_raid_add_base_bdev", 00:16:24.551 "req_id": 1 00:16:24.551 } 00:16:24.551 Got JSON-RPC error response 00:16:24.551 response: 00:16:24.551 { 
00:16:24.551 "code": -22, 00:16:24.551 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:24.551 } 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.551 09:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.487 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.747 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.747 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.747 "name": "raid_bdev1", 00:16:25.747 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:25.747 "strip_size_kb": 0, 00:16:25.747 "state": "online", 00:16:25.747 "raid_level": "raid1", 00:16:25.747 "superblock": true, 00:16:25.747 "num_base_bdevs": 4, 00:16:25.747 "num_base_bdevs_discovered": 2, 00:16:25.747 "num_base_bdevs_operational": 2, 00:16:25.747 "base_bdevs_list": [ 00:16:25.747 { 00:16:25.747 "name": null, 00:16:25.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.747 "is_configured": false, 00:16:25.747 "data_offset": 0, 00:16:25.747 "data_size": 63488 00:16:25.747 }, 00:16:25.747 { 00:16:25.747 "name": null, 00:16:25.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.747 "is_configured": false, 00:16:25.747 "data_offset": 2048, 00:16:25.747 "data_size": 63488 00:16:25.747 }, 00:16:25.747 { 00:16:25.747 "name": "BaseBdev3", 00:16:25.747 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:25.747 "is_configured": true, 00:16:25.747 "data_offset": 2048, 00:16:25.747 "data_size": 63488 00:16:25.747 }, 00:16:25.747 { 00:16:25.747 "name": "BaseBdev4", 00:16:25.747 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:25.747 "is_configured": true, 00:16:25.747 "data_offset": 2048, 00:16:25.747 "data_size": 63488 00:16:25.747 } 00:16:25.747 ] 00:16:25.747 }' 00:16:25.747 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.747 09:50:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.006 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.007 "name": "raid_bdev1", 00:16:26.007 "uuid": "5674982e-ab87-49e7-84ba-927ef22768d8", 00:16:26.007 "strip_size_kb": 0, 00:16:26.007 "state": "online", 00:16:26.007 "raid_level": "raid1", 00:16:26.007 "superblock": true, 00:16:26.007 "num_base_bdevs": 4, 00:16:26.007 "num_base_bdevs_discovered": 2, 00:16:26.007 "num_base_bdevs_operational": 2, 00:16:26.007 "base_bdevs_list": [ 00:16:26.007 { 00:16:26.007 "name": null, 00:16:26.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.007 "is_configured": false, 00:16:26.007 "data_offset": 0, 00:16:26.007 "data_size": 63488 00:16:26.007 }, 00:16:26.007 { 00:16:26.007 "name": null, 00:16:26.007 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:26.007 "is_configured": false, 00:16:26.007 "data_offset": 2048, 00:16:26.007 "data_size": 63488 00:16:26.007 }, 00:16:26.007 { 00:16:26.007 "name": "BaseBdev3", 00:16:26.007 "uuid": "dc03771f-5b3c-5dfb-a69b-4512af3e5e76", 00:16:26.007 "is_configured": true, 00:16:26.007 "data_offset": 2048, 00:16:26.007 "data_size": 63488 00:16:26.007 }, 00:16:26.007 { 00:16:26.007 "name": "BaseBdev4", 00:16:26.007 "uuid": "2e93ea6f-05dd-5fab-93f6-14339240fa82", 00:16:26.007 "is_configured": true, 00:16:26.007 "data_offset": 2048, 00:16:26.007 "data_size": 63488 00:16:26.007 } 00:16:26.007 ] 00:16:26.007 }' 00:16:26.007 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.266 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.266 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79743 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79743 ']' 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79743 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79743 00:16:26.267 killing process with pid 79743 00:16:26.267 Received shutdown signal, test time was about 18.001665 seconds 00:16:26.267 00:16:26.267 Latency(us) 00:16:26.267 [2024-10-11T09:50:10.899Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:16:26.267 [2024-10-11T09:50:10.899Z] =================================================================================================================== 00:16:26.267 [2024-10-11T09:50:10.899Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79743' 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79743 00:16:26.267 [2024-10-11 09:50:10.765104] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.267 [2024-10-11 09:50:10.765242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.267 09:50:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79743 00:16:26.267 [2024-10-11 09:50:10.765310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.267 [2024-10-11 09:50:10.765324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:26.834 [2024-10-11 09:50:11.167590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.774 ************************************ 00:16:27.774 END TEST raid_rebuild_test_sb_io 00:16:27.774 ************************************ 00:16:27.774 09:50:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:27.774 00:16:27.774 real 0m21.477s 00:16:27.774 user 0m28.113s 00:16:27.774 sys 0m2.570s 00:16:27.774 09:50:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.774 09:50:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.774 09:50:12 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:27.774 09:50:12 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:27.774 09:50:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:27.774 09:50:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.774 09:50:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.774 ************************************ 00:16:27.774 START TEST raid5f_state_function_test 00:16:27.774 ************************************ 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.774 09:50:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:27.774 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80465 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:27.775 09:50:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80465' 00:16:27.775 Process raid pid: 80465 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80465 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80465 ']' 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.775 09:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.037 [2024-10-11 09:50:12.467723] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:16:28.037 [2024-10-11 09:50:12.467931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.037 [2024-10-11 09:50:12.633148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.295 [2024-10-11 09:50:12.760987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.555 [2024-10-11 09:50:12.989171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.555 [2024-10-11 09:50:12.989207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.815 [2024-10-11 09:50:13.291814] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.815 [2024-10-11 09:50:13.291918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.815 [2024-10-11 09:50:13.291952] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.815 [2024-10-11 09:50:13.291977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.815 [2024-10-11 09:50:13.291995] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:28.815 [2024-10-11 09:50:13.292017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.815 "name": "Existed_Raid", 00:16:28.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.815 "strip_size_kb": 64, 00:16:28.815 "state": "configuring", 00:16:28.815 "raid_level": "raid5f", 00:16:28.815 "superblock": false, 00:16:28.815 "num_base_bdevs": 3, 00:16:28.815 "num_base_bdevs_discovered": 0, 00:16:28.815 "num_base_bdevs_operational": 3, 00:16:28.815 "base_bdevs_list": [ 00:16:28.815 { 00:16:28.815 "name": "BaseBdev1", 00:16:28.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.815 "is_configured": false, 00:16:28.815 "data_offset": 0, 00:16:28.815 "data_size": 0 00:16:28.815 }, 00:16:28.815 { 00:16:28.815 "name": "BaseBdev2", 00:16:28.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.815 "is_configured": false, 00:16:28.815 "data_offset": 0, 00:16:28.815 "data_size": 0 00:16:28.815 }, 00:16:28.815 { 00:16:28.815 "name": "BaseBdev3", 00:16:28.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.815 "is_configured": false, 00:16:28.815 "data_offset": 0, 00:16:28.815 "data_size": 0 00:16:28.815 } 00:16:28.815 ] 00:16:28.815 }' 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.815 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.381 [2024-10-11 09:50:13.762979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.381 [2024-10-11 09:50:13.763068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.381 [2024-10-11 09:50:13.771021] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.381 [2024-10-11 09:50:13.771077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.381 [2024-10-11 09:50:13.771089] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.381 [2024-10-11 09:50:13.771102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.381 [2024-10-11 09:50:13.771111] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.381 [2024-10-11 09:50:13.771122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.381 [2024-10-11 09:50:13.828990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.381 BaseBdev1 00:16:29.381 09:50:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.381 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.381 [ 00:16:29.381 { 00:16:29.381 "name": "BaseBdev1", 00:16:29.381 "aliases": [ 00:16:29.381 "0bce51ca-f0bd-48e2-aaeb-7ba67794d942" 00:16:29.381 ], 00:16:29.381 "product_name": "Malloc disk", 00:16:29.381 "block_size": 512, 00:16:29.381 "num_blocks": 65536, 00:16:29.381 "uuid": "0bce51ca-f0bd-48e2-aaeb-7ba67794d942", 00:16:29.381 "assigned_rate_limits": { 00:16:29.381 "rw_ios_per_sec": 0, 00:16:29.381 
"rw_mbytes_per_sec": 0, 00:16:29.381 "r_mbytes_per_sec": 0, 00:16:29.381 "w_mbytes_per_sec": 0 00:16:29.381 }, 00:16:29.381 "claimed": true, 00:16:29.381 "claim_type": "exclusive_write", 00:16:29.381 "zoned": false, 00:16:29.381 "supported_io_types": { 00:16:29.381 "read": true, 00:16:29.381 "write": true, 00:16:29.381 "unmap": true, 00:16:29.381 "flush": true, 00:16:29.381 "reset": true, 00:16:29.381 "nvme_admin": false, 00:16:29.381 "nvme_io": false, 00:16:29.381 "nvme_io_md": false, 00:16:29.381 "write_zeroes": true, 00:16:29.381 "zcopy": true, 00:16:29.381 "get_zone_info": false, 00:16:29.381 "zone_management": false, 00:16:29.381 "zone_append": false, 00:16:29.381 "compare": false, 00:16:29.381 "compare_and_write": false, 00:16:29.381 "abort": true, 00:16:29.381 "seek_hole": false, 00:16:29.382 "seek_data": false, 00:16:29.382 "copy": true, 00:16:29.382 "nvme_iov_md": false 00:16:29.382 }, 00:16:29.382 "memory_domains": [ 00:16:29.382 { 00:16:29.382 "dma_device_id": "system", 00:16:29.382 "dma_device_type": 1 00:16:29.382 }, 00:16:29.382 { 00:16:29.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.382 "dma_device_type": 2 00:16:29.382 } 00:16:29.382 ], 00:16:29.382 "driver_specific": {} 00:16:29.382 } 00:16:29.382 ] 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.382 09:50:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.382 "name": "Existed_Raid", 00:16:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.382 "strip_size_kb": 64, 00:16:29.382 "state": "configuring", 00:16:29.382 "raid_level": "raid5f", 00:16:29.382 "superblock": false, 00:16:29.382 "num_base_bdevs": 3, 00:16:29.382 "num_base_bdevs_discovered": 1, 00:16:29.382 "num_base_bdevs_operational": 3, 00:16:29.382 "base_bdevs_list": [ 00:16:29.382 { 00:16:29.382 "name": "BaseBdev1", 00:16:29.382 "uuid": "0bce51ca-f0bd-48e2-aaeb-7ba67794d942", 00:16:29.382 "is_configured": true, 00:16:29.382 "data_offset": 0, 00:16:29.382 "data_size": 65536 00:16:29.382 }, 00:16:29.382 { 00:16:29.382 "name": 
"BaseBdev2", 00:16:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.382 "is_configured": false, 00:16:29.382 "data_offset": 0, 00:16:29.382 "data_size": 0 00:16:29.382 }, 00:16:29.382 { 00:16:29.382 "name": "BaseBdev3", 00:16:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.382 "is_configured": false, 00:16:29.382 "data_offset": 0, 00:16:29.382 "data_size": 0 00:16:29.382 } 00:16:29.382 ] 00:16:29.382 }' 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.382 09:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.640 [2024-10-11 09:50:14.228378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.640 [2024-10-11 09:50:14.228490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.640 [2024-10-11 09:50:14.240429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.640 [2024-10-11 09:50:14.242649] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:29.640 [2024-10-11 09:50:14.242758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.640 [2024-10-11 09:50:14.242803] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.640 [2024-10-11 09:50:14.242832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.640 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.899 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.899 "name": "Existed_Raid", 00:16:29.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.899 "strip_size_kb": 64, 00:16:29.899 "state": "configuring", 00:16:29.899 "raid_level": "raid5f", 00:16:29.899 "superblock": false, 00:16:29.899 "num_base_bdevs": 3, 00:16:29.899 "num_base_bdevs_discovered": 1, 00:16:29.899 "num_base_bdevs_operational": 3, 00:16:29.899 "base_bdevs_list": [ 00:16:29.899 { 00:16:29.899 "name": "BaseBdev1", 00:16:29.899 "uuid": "0bce51ca-f0bd-48e2-aaeb-7ba67794d942", 00:16:29.899 "is_configured": true, 00:16:29.899 "data_offset": 0, 00:16:29.899 "data_size": 65536 00:16:29.899 }, 00:16:29.899 { 00:16:29.899 "name": "BaseBdev2", 00:16:29.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.899 "is_configured": false, 00:16:29.899 "data_offset": 0, 00:16:29.899 "data_size": 0 00:16:29.899 }, 00:16:29.899 { 00:16:29.899 "name": "BaseBdev3", 00:16:29.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.899 "is_configured": false, 00:16:29.899 "data_offset": 0, 00:16:29.899 "data_size": 0 00:16:29.899 } 00:16:29.899 ] 00:16:29.899 }' 00:16:29.899 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.899 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.159 [2024-10-11 09:50:14.739494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.159 BaseBdev2 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:30.159 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.160 [ 00:16:30.160 { 00:16:30.160 "name": "BaseBdev2", 00:16:30.160 "aliases": [ 00:16:30.160 "37be6da9-a62c-42ba-a9f9-58643433dff4" 00:16:30.160 ], 00:16:30.160 "product_name": "Malloc disk", 00:16:30.160 "block_size": 512, 00:16:30.160 "num_blocks": 65536, 00:16:30.160 "uuid": "37be6da9-a62c-42ba-a9f9-58643433dff4", 00:16:30.160 "assigned_rate_limits": { 00:16:30.160 "rw_ios_per_sec": 0, 00:16:30.160 "rw_mbytes_per_sec": 0, 00:16:30.160 "r_mbytes_per_sec": 0, 00:16:30.160 "w_mbytes_per_sec": 0 00:16:30.160 }, 00:16:30.160 "claimed": true, 00:16:30.160 "claim_type": "exclusive_write", 00:16:30.160 "zoned": false, 00:16:30.160 "supported_io_types": { 00:16:30.160 "read": true, 00:16:30.160 "write": true, 00:16:30.160 "unmap": true, 00:16:30.160 "flush": true, 00:16:30.160 "reset": true, 00:16:30.160 "nvme_admin": false, 00:16:30.160 "nvme_io": false, 00:16:30.160 "nvme_io_md": false, 00:16:30.160 "write_zeroes": true, 00:16:30.160 "zcopy": true, 00:16:30.160 "get_zone_info": false, 00:16:30.160 "zone_management": false, 00:16:30.160 "zone_append": false, 00:16:30.160 "compare": false, 00:16:30.160 "compare_and_write": false, 00:16:30.160 "abort": true, 00:16:30.160 "seek_hole": false, 00:16:30.160 "seek_data": false, 00:16:30.160 "copy": true, 00:16:30.160 "nvme_iov_md": false 00:16:30.160 }, 00:16:30.160 "memory_domains": [ 00:16:30.160 { 00:16:30.160 "dma_device_id": "system", 00:16:30.160 "dma_device_type": 1 00:16:30.160 }, 00:16:30.160 { 00:16:30.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.160 "dma_device_type": 2 00:16:30.160 } 00:16:30.160 ], 00:16:30.160 "driver_specific": {} 00:16:30.160 } 00:16:30.160 ] 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.160 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.419 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.419 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:30.419 "name": "Existed_Raid", 00:16:30.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.419 "strip_size_kb": 64, 00:16:30.419 "state": "configuring", 00:16:30.419 "raid_level": "raid5f", 00:16:30.419 "superblock": false, 00:16:30.419 "num_base_bdevs": 3, 00:16:30.419 "num_base_bdevs_discovered": 2, 00:16:30.419 "num_base_bdevs_operational": 3, 00:16:30.419 "base_bdevs_list": [ 00:16:30.419 { 00:16:30.419 "name": "BaseBdev1", 00:16:30.419 "uuid": "0bce51ca-f0bd-48e2-aaeb-7ba67794d942", 00:16:30.419 "is_configured": true, 00:16:30.419 "data_offset": 0, 00:16:30.419 "data_size": 65536 00:16:30.419 }, 00:16:30.419 { 00:16:30.419 "name": "BaseBdev2", 00:16:30.419 "uuid": "37be6da9-a62c-42ba-a9f9-58643433dff4", 00:16:30.419 "is_configured": true, 00:16:30.419 "data_offset": 0, 00:16:30.419 "data_size": 65536 00:16:30.419 }, 00:16:30.419 { 00:16:30.419 "name": "BaseBdev3", 00:16:30.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.419 "is_configured": false, 00:16:30.419 "data_offset": 0, 00:16:30.419 "data_size": 0 00:16:30.419 } 00:16:30.419 ] 00:16:30.419 }' 00:16:30.419 09:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.419 09:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.677 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:30.677 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.677 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.677 [2024-10-11 09:50:15.300151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.677 [2024-10-11 09:50:15.300321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:30.677 [2024-10-11 09:50:15.300376] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:30.677 [2024-10-11 09:50:15.300715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:30.677 [2024-10-11 09:50:15.307860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:30.677 [2024-10-11 09:50:15.307922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:30.677 [2024-10-11 09:50:15.308330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.986 BaseBdev3 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.986 [ 00:16:30.986 { 00:16:30.986 "name": "BaseBdev3", 00:16:30.986 "aliases": [ 00:16:30.986 "c930e46e-7ccf-49a8-afa5-10c704b873af" 00:16:30.986 ], 00:16:30.986 "product_name": "Malloc disk", 00:16:30.986 "block_size": 512, 00:16:30.986 "num_blocks": 65536, 00:16:30.986 "uuid": "c930e46e-7ccf-49a8-afa5-10c704b873af", 00:16:30.986 "assigned_rate_limits": { 00:16:30.986 "rw_ios_per_sec": 0, 00:16:30.986 "rw_mbytes_per_sec": 0, 00:16:30.986 "r_mbytes_per_sec": 0, 00:16:30.986 "w_mbytes_per_sec": 0 00:16:30.986 }, 00:16:30.986 "claimed": true, 00:16:30.986 "claim_type": "exclusive_write", 00:16:30.986 "zoned": false, 00:16:30.986 "supported_io_types": { 00:16:30.986 "read": true, 00:16:30.986 "write": true, 00:16:30.986 "unmap": true, 00:16:30.986 "flush": true, 00:16:30.986 "reset": true, 00:16:30.986 "nvme_admin": false, 00:16:30.986 "nvme_io": false, 00:16:30.986 "nvme_io_md": false, 00:16:30.986 "write_zeroes": true, 00:16:30.986 "zcopy": true, 00:16:30.986 "get_zone_info": false, 00:16:30.986 "zone_management": false, 00:16:30.986 "zone_append": false, 00:16:30.986 "compare": false, 00:16:30.986 "compare_and_write": false, 00:16:30.986 "abort": true, 00:16:30.986 "seek_hole": false, 00:16:30.986 "seek_data": false, 00:16:30.986 "copy": true, 00:16:30.986 "nvme_iov_md": false 00:16:30.986 }, 00:16:30.986 "memory_domains": [ 00:16:30.986 { 00:16:30.986 "dma_device_id": "system", 00:16:30.986 "dma_device_type": 1 00:16:30.986 }, 00:16:30.986 { 00:16:30.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.986 "dma_device_type": 2 00:16:30.986 } 00:16:30.986 ], 00:16:30.986 "driver_specific": {} 00:16:30.986 } 00:16:30.986 ] 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.986 09:50:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.986 "name": "Existed_Raid", 00:16:30.986 "uuid": "33586181-9325-446b-910f-d679fa693b3c", 00:16:30.986 "strip_size_kb": 64, 00:16:30.986 "state": "online", 00:16:30.986 "raid_level": "raid5f", 00:16:30.986 "superblock": false, 00:16:30.986 "num_base_bdevs": 3, 00:16:30.986 "num_base_bdevs_discovered": 3, 00:16:30.986 "num_base_bdevs_operational": 3, 00:16:30.986 "base_bdevs_list": [ 00:16:30.986 { 00:16:30.986 "name": "BaseBdev1", 00:16:30.986 "uuid": "0bce51ca-f0bd-48e2-aaeb-7ba67794d942", 00:16:30.986 "is_configured": true, 00:16:30.986 "data_offset": 0, 00:16:30.986 "data_size": 65536 00:16:30.986 }, 00:16:30.986 { 00:16:30.986 "name": "BaseBdev2", 00:16:30.986 "uuid": "37be6da9-a62c-42ba-a9f9-58643433dff4", 00:16:30.986 "is_configured": true, 00:16:30.986 "data_offset": 0, 00:16:30.986 "data_size": 65536 00:16:30.986 }, 00:16:30.986 { 00:16:30.986 "name": "BaseBdev3", 00:16:30.986 "uuid": "c930e46e-7ccf-49a8-afa5-10c704b873af", 00:16:30.986 "is_configured": true, 00:16:30.986 "data_offset": 0, 00:16:30.986 "data_size": 65536 00:16:30.986 } 00:16:30.986 ] 00:16:30.986 }' 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.986 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.245 09:50:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.245 [2024-10-11 09:50:15.706512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.245 "name": "Existed_Raid", 00:16:31.245 "aliases": [ 00:16:31.245 "33586181-9325-446b-910f-d679fa693b3c" 00:16:31.245 ], 00:16:31.245 "product_name": "Raid Volume", 00:16:31.245 "block_size": 512, 00:16:31.245 "num_blocks": 131072, 00:16:31.245 "uuid": "33586181-9325-446b-910f-d679fa693b3c", 00:16:31.245 "assigned_rate_limits": { 00:16:31.245 "rw_ios_per_sec": 0, 00:16:31.245 "rw_mbytes_per_sec": 0, 00:16:31.245 "r_mbytes_per_sec": 0, 00:16:31.245 "w_mbytes_per_sec": 0 00:16:31.245 }, 00:16:31.245 "claimed": false, 00:16:31.245 "zoned": false, 00:16:31.245 "supported_io_types": { 00:16:31.245 "read": true, 00:16:31.245 "write": true, 00:16:31.245 "unmap": false, 00:16:31.245 "flush": false, 00:16:31.245 "reset": true, 00:16:31.245 "nvme_admin": false, 00:16:31.245 "nvme_io": false, 00:16:31.245 "nvme_io_md": false, 00:16:31.245 "write_zeroes": true, 00:16:31.245 "zcopy": false, 00:16:31.245 "get_zone_info": false, 00:16:31.245 "zone_management": false, 00:16:31.245 "zone_append": false, 
00:16:31.245 "compare": false, 00:16:31.245 "compare_and_write": false, 00:16:31.245 "abort": false, 00:16:31.245 "seek_hole": false, 00:16:31.245 "seek_data": false, 00:16:31.245 "copy": false, 00:16:31.245 "nvme_iov_md": false 00:16:31.245 }, 00:16:31.245 "driver_specific": { 00:16:31.245 "raid": { 00:16:31.245 "uuid": "33586181-9325-446b-910f-d679fa693b3c", 00:16:31.245 "strip_size_kb": 64, 00:16:31.245 "state": "online", 00:16:31.245 "raid_level": "raid5f", 00:16:31.245 "superblock": false, 00:16:31.245 "num_base_bdevs": 3, 00:16:31.245 "num_base_bdevs_discovered": 3, 00:16:31.245 "num_base_bdevs_operational": 3, 00:16:31.245 "base_bdevs_list": [ 00:16:31.245 { 00:16:31.245 "name": "BaseBdev1", 00:16:31.245 "uuid": "0bce51ca-f0bd-48e2-aaeb-7ba67794d942", 00:16:31.245 "is_configured": true, 00:16:31.245 "data_offset": 0, 00:16:31.245 "data_size": 65536 00:16:31.245 }, 00:16:31.245 { 00:16:31.245 "name": "BaseBdev2", 00:16:31.245 "uuid": "37be6da9-a62c-42ba-a9f9-58643433dff4", 00:16:31.245 "is_configured": true, 00:16:31.245 "data_offset": 0, 00:16:31.245 "data_size": 65536 00:16:31.245 }, 00:16:31.245 { 00:16:31.245 "name": "BaseBdev3", 00:16:31.245 "uuid": "c930e46e-7ccf-49a8-afa5-10c704b873af", 00:16:31.245 "is_configured": true, 00:16:31.245 "data_offset": 0, 00:16:31.245 "data_size": 65536 00:16:31.245 } 00:16:31.245 ] 00:16:31.245 } 00:16:31.245 } 00:16:31.245 }' 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:31.245 BaseBdev2 00:16:31.245 BaseBdev3' 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.245 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:31.246 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.246 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.246 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.246 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.504 09:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.504 [2024-10-11 09:50:16.001847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:31.504 
09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.504 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.763 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.763 "name": "Existed_Raid", 00:16:31.763 "uuid": "33586181-9325-446b-910f-d679fa693b3c", 00:16:31.763 "strip_size_kb": 64, 00:16:31.763 "state": 
"online", 00:16:31.763 "raid_level": "raid5f", 00:16:31.763 "superblock": false, 00:16:31.763 "num_base_bdevs": 3, 00:16:31.763 "num_base_bdevs_discovered": 2, 00:16:31.763 "num_base_bdevs_operational": 2, 00:16:31.763 "base_bdevs_list": [ 00:16:31.763 { 00:16:31.763 "name": null, 00:16:31.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.763 "is_configured": false, 00:16:31.763 "data_offset": 0, 00:16:31.763 "data_size": 65536 00:16:31.763 }, 00:16:31.763 { 00:16:31.763 "name": "BaseBdev2", 00:16:31.763 "uuid": "37be6da9-a62c-42ba-a9f9-58643433dff4", 00:16:31.763 "is_configured": true, 00:16:31.763 "data_offset": 0, 00:16:31.763 "data_size": 65536 00:16:31.763 }, 00:16:31.763 { 00:16:31.763 "name": "BaseBdev3", 00:16:31.763 "uuid": "c930e46e-7ccf-49a8-afa5-10c704b873af", 00:16:31.763 "is_configured": true, 00:16:31.763 "data_offset": 0, 00:16:31.763 "data_size": 65536 00:16:31.763 } 00:16:31.763 ] 00:16:31.763 }' 00:16:31.763 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.763 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.021 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.022 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.022 [2024-10-11 09:50:16.643276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.022 [2024-10-11 09:50:16.643443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.281 [2024-10-11 09:50:16.745330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.281 [2024-10-11 09:50:16.797320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.281 [2024-10-11 09:50:16.797431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.281 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.540 09:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.540 BaseBdev2 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:32.540 [ 00:16:32.540 { 00:16:32.540 "name": "BaseBdev2", 00:16:32.540 "aliases": [ 00:16:32.540 "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0" 00:16:32.540 ], 00:16:32.540 "product_name": "Malloc disk", 00:16:32.540 "block_size": 512, 00:16:32.540 "num_blocks": 65536, 00:16:32.540 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:32.540 "assigned_rate_limits": { 00:16:32.540 "rw_ios_per_sec": 0, 00:16:32.540 "rw_mbytes_per_sec": 0, 00:16:32.540 "r_mbytes_per_sec": 0, 00:16:32.540 "w_mbytes_per_sec": 0 00:16:32.540 }, 00:16:32.540 "claimed": false, 00:16:32.540 "zoned": false, 00:16:32.540 "supported_io_types": { 00:16:32.540 "read": true, 00:16:32.540 "write": true, 00:16:32.540 "unmap": true, 00:16:32.540 "flush": true, 00:16:32.540 "reset": true, 00:16:32.540 "nvme_admin": false, 00:16:32.540 "nvme_io": false, 00:16:32.540 "nvme_io_md": false, 00:16:32.540 "write_zeroes": true, 00:16:32.540 "zcopy": true, 00:16:32.540 "get_zone_info": false, 00:16:32.540 "zone_management": false, 00:16:32.540 "zone_append": false, 00:16:32.540 "compare": false, 00:16:32.540 "compare_and_write": false, 00:16:32.540 "abort": true, 00:16:32.540 "seek_hole": false, 00:16:32.540 "seek_data": false, 00:16:32.540 "copy": true, 00:16:32.540 "nvme_iov_md": false 00:16:32.540 }, 00:16:32.540 "memory_domains": [ 00:16:32.540 { 00:16:32.540 "dma_device_id": "system", 00:16:32.540 "dma_device_type": 1 00:16:32.540 }, 00:16:32.540 { 00:16:32.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.540 "dma_device_type": 2 00:16:32.540 } 00:16:32.540 ], 00:16:32.540 "driver_specific": {} 00:16:32.540 } 00:16:32.540 ] 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.540 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.540 BaseBdev3 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.541 [ 00:16:32.541 { 00:16:32.541 "name": "BaseBdev3", 00:16:32.541 "aliases": [ 00:16:32.541 "6b92a772-dae9-4d76-8bff-5cd08d0b5f30" 00:16:32.541 ], 00:16:32.541 "product_name": "Malloc disk", 00:16:32.541 "block_size": 512, 00:16:32.541 "num_blocks": 65536, 00:16:32.541 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:32.541 "assigned_rate_limits": { 00:16:32.541 "rw_ios_per_sec": 0, 00:16:32.541 "rw_mbytes_per_sec": 0, 00:16:32.541 "r_mbytes_per_sec": 0, 00:16:32.541 "w_mbytes_per_sec": 0 00:16:32.541 }, 00:16:32.541 "claimed": false, 00:16:32.541 "zoned": false, 00:16:32.541 "supported_io_types": { 00:16:32.541 "read": true, 00:16:32.541 "write": true, 00:16:32.541 "unmap": true, 00:16:32.541 "flush": true, 00:16:32.541 "reset": true, 00:16:32.541 "nvme_admin": false, 00:16:32.541 "nvme_io": false, 00:16:32.541 "nvme_io_md": false, 00:16:32.541 "write_zeroes": true, 00:16:32.541 "zcopy": true, 00:16:32.541 "get_zone_info": false, 00:16:32.541 "zone_management": false, 00:16:32.541 "zone_append": false, 00:16:32.541 "compare": false, 00:16:32.541 "compare_and_write": false, 00:16:32.541 "abort": true, 00:16:32.541 "seek_hole": false, 00:16:32.541 "seek_data": false, 00:16:32.541 "copy": true, 00:16:32.541 "nvme_iov_md": false 00:16:32.541 }, 00:16:32.541 "memory_domains": [ 00:16:32.541 { 00:16:32.541 "dma_device_id": "system", 00:16:32.541 "dma_device_type": 1 00:16:32.541 }, 00:16:32.541 { 00:16:32.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.541 "dma_device_type": 2 00:16:32.541 } 00:16:32.541 ], 00:16:32.541 "driver_specific": {} 00:16:32.541 } 00:16:32.541 ] 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:32.541 09:50:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.541 [2024-10-11 09:50:17.135674] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.541 [2024-10-11 09:50:17.135808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.541 [2024-10-11 09:50:17.135868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.541 [2024-10-11 09:50:17.137979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.541 09:50:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.541 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.799 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.799 "name": "Existed_Raid", 00:16:32.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.799 "strip_size_kb": 64, 00:16:32.799 "state": "configuring", 00:16:32.799 "raid_level": "raid5f", 00:16:32.799 "superblock": false, 00:16:32.799 "num_base_bdevs": 3, 00:16:32.799 "num_base_bdevs_discovered": 2, 00:16:32.799 "num_base_bdevs_operational": 3, 00:16:32.799 "base_bdevs_list": [ 00:16:32.799 { 00:16:32.799 "name": "BaseBdev1", 00:16:32.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.799 "is_configured": false, 00:16:32.799 "data_offset": 0, 00:16:32.799 "data_size": 0 00:16:32.799 }, 00:16:32.799 { 00:16:32.799 "name": "BaseBdev2", 00:16:32.799 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:32.799 "is_configured": true, 00:16:32.799 "data_offset": 0, 00:16:32.799 "data_size": 65536 00:16:32.799 }, 00:16:32.799 { 00:16:32.799 "name": "BaseBdev3", 00:16:32.799 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:32.799 "is_configured": true, 
00:16:32.799 "data_offset": 0, 00:16:32.799 "data_size": 65536 00:16:32.799 } 00:16:32.799 ] 00:16:32.799 }' 00:16:32.799 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.799 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.057 [2024-10-11 09:50:17.511028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.057 09:50:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.057 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.057 "name": "Existed_Raid", 00:16:33.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.057 "strip_size_kb": 64, 00:16:33.057 "state": "configuring", 00:16:33.058 "raid_level": "raid5f", 00:16:33.058 "superblock": false, 00:16:33.058 "num_base_bdevs": 3, 00:16:33.058 "num_base_bdevs_discovered": 1, 00:16:33.058 "num_base_bdevs_operational": 3, 00:16:33.058 "base_bdevs_list": [ 00:16:33.058 { 00:16:33.058 "name": "BaseBdev1", 00:16:33.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.058 "is_configured": false, 00:16:33.058 "data_offset": 0, 00:16:33.058 "data_size": 0 00:16:33.058 }, 00:16:33.058 { 00:16:33.058 "name": null, 00:16:33.058 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:33.058 "is_configured": false, 00:16:33.058 "data_offset": 0, 00:16:33.058 "data_size": 65536 00:16:33.058 }, 00:16:33.058 { 00:16:33.058 "name": "BaseBdev3", 00:16:33.058 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:33.058 "is_configured": true, 00:16:33.058 "data_offset": 0, 00:16:33.058 "data_size": 65536 00:16:33.058 } 00:16:33.058 ] 00:16:33.058 }' 00:16:33.058 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.058 09:50:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.317 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.575 [2024-10-11 09:50:17.985346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.575 BaseBdev1 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:33.575 09:50:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.575 09:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.575 [ 00:16:33.575 { 00:16:33.575 "name": "BaseBdev1", 00:16:33.575 "aliases": [ 00:16:33.575 "ddf3cb9e-b583-4ab6-af5b-88910a43748e" 00:16:33.575 ], 00:16:33.575 "product_name": "Malloc disk", 00:16:33.575 "block_size": 512, 00:16:33.575 "num_blocks": 65536, 00:16:33.575 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:33.575 "assigned_rate_limits": { 00:16:33.575 "rw_ios_per_sec": 0, 00:16:33.575 "rw_mbytes_per_sec": 0, 00:16:33.575 "r_mbytes_per_sec": 0, 00:16:33.575 "w_mbytes_per_sec": 0 00:16:33.575 }, 00:16:33.575 "claimed": true, 00:16:33.575 "claim_type": "exclusive_write", 00:16:33.575 "zoned": false, 00:16:33.575 "supported_io_types": { 00:16:33.575 "read": true, 00:16:33.575 "write": true, 00:16:33.575 "unmap": true, 00:16:33.575 "flush": true, 00:16:33.575 "reset": true, 00:16:33.575 "nvme_admin": false, 00:16:33.575 "nvme_io": false, 00:16:33.575 "nvme_io_md": false, 00:16:33.575 "write_zeroes": true, 00:16:33.575 "zcopy": true, 00:16:33.575 "get_zone_info": false, 00:16:33.575 "zone_management": false, 00:16:33.575 "zone_append": false, 00:16:33.575 
"compare": false, 00:16:33.575 "compare_and_write": false, 00:16:33.575 "abort": true, 00:16:33.575 "seek_hole": false, 00:16:33.575 "seek_data": false, 00:16:33.575 "copy": true, 00:16:33.575 "nvme_iov_md": false 00:16:33.575 }, 00:16:33.575 "memory_domains": [ 00:16:33.575 { 00:16:33.575 "dma_device_id": "system", 00:16:33.575 "dma_device_type": 1 00:16:33.575 }, 00:16:33.575 { 00:16:33.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.575 "dma_device_type": 2 00:16:33.575 } 00:16:33.575 ], 00:16:33.575 "driver_specific": {} 00:16:33.575 } 00:16:33.575 ] 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.575 09:50:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.575 "name": "Existed_Raid", 00:16:33.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.575 "strip_size_kb": 64, 00:16:33.575 "state": "configuring", 00:16:33.575 "raid_level": "raid5f", 00:16:33.575 "superblock": false, 00:16:33.575 "num_base_bdevs": 3, 00:16:33.575 "num_base_bdevs_discovered": 2, 00:16:33.575 "num_base_bdevs_operational": 3, 00:16:33.575 "base_bdevs_list": [ 00:16:33.575 { 00:16:33.575 "name": "BaseBdev1", 00:16:33.575 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:33.575 "is_configured": true, 00:16:33.575 "data_offset": 0, 00:16:33.575 "data_size": 65536 00:16:33.575 }, 00:16:33.575 { 00:16:33.575 "name": null, 00:16:33.575 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:33.575 "is_configured": false, 00:16:33.575 "data_offset": 0, 00:16:33.575 "data_size": 65536 00:16:33.575 }, 00:16:33.575 { 00:16:33.575 "name": "BaseBdev3", 00:16:33.575 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:33.575 "is_configured": true, 00:16:33.575 "data_offset": 0, 00:16:33.575 "data_size": 65536 00:16:33.575 } 00:16:33.575 ] 00:16:33.575 }' 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.575 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.833 09:50:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.833 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:33.833 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.833 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.833 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.091 [2024-10-11 09:50:18.468643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.091 09:50:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.091 "name": "Existed_Raid", 00:16:34.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.091 "strip_size_kb": 64, 00:16:34.091 "state": "configuring", 00:16:34.091 "raid_level": "raid5f", 00:16:34.091 "superblock": false, 00:16:34.091 "num_base_bdevs": 3, 00:16:34.091 "num_base_bdevs_discovered": 1, 00:16:34.091 "num_base_bdevs_operational": 3, 00:16:34.091 "base_bdevs_list": [ 00:16:34.091 { 00:16:34.091 "name": "BaseBdev1", 00:16:34.091 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:34.091 "is_configured": true, 00:16:34.091 "data_offset": 0, 00:16:34.091 "data_size": 65536 00:16:34.091 }, 00:16:34.091 { 00:16:34.091 "name": null, 00:16:34.091 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:34.091 "is_configured": false, 00:16:34.091 "data_offset": 0, 00:16:34.091 "data_size": 65536 00:16:34.091 }, 00:16:34.091 { 00:16:34.091 "name": null, 
00:16:34.091 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:34.091 "is_configured": false, 00:16:34.091 "data_offset": 0, 00:16:34.091 "data_size": 65536 00:16:34.091 } 00:16:34.091 ] 00:16:34.091 }' 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.091 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.350 [2024-10-11 09:50:18.923923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.350 09:50:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.350 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.350 "name": "Existed_Raid", 00:16:34.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.350 "strip_size_kb": 64, 00:16:34.350 "state": "configuring", 00:16:34.351 "raid_level": "raid5f", 00:16:34.351 "superblock": false, 00:16:34.351 "num_base_bdevs": 3, 00:16:34.351 "num_base_bdevs_discovered": 2, 00:16:34.351 "num_base_bdevs_operational": 3, 00:16:34.351 "base_bdevs_list": [ 00:16:34.351 { 
00:16:34.351 "name": "BaseBdev1", 00:16:34.351 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:34.351 "is_configured": true, 00:16:34.351 "data_offset": 0, 00:16:34.351 "data_size": 65536 00:16:34.351 }, 00:16:34.351 { 00:16:34.351 "name": null, 00:16:34.351 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:34.351 "is_configured": false, 00:16:34.351 "data_offset": 0, 00:16:34.351 "data_size": 65536 00:16:34.351 }, 00:16:34.351 { 00:16:34.351 "name": "BaseBdev3", 00:16:34.351 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:34.351 "is_configured": true, 00:16:34.351 "data_offset": 0, 00:16:34.351 "data_size": 65536 00:16:34.351 } 00:16:34.351 ] 00:16:34.351 }' 00:16:34.351 09:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.351 09:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.961 [2024-10-11 09:50:19.383266] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.961 "name": "Existed_Raid", 00:16:34.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.961 "strip_size_kb": 64, 00:16:34.961 "state": "configuring", 00:16:34.961 "raid_level": "raid5f", 00:16:34.961 "superblock": false, 00:16:34.961 "num_base_bdevs": 3, 00:16:34.961 "num_base_bdevs_discovered": 1, 00:16:34.961 "num_base_bdevs_operational": 3, 00:16:34.961 "base_bdevs_list": [ 00:16:34.961 { 00:16:34.961 "name": null, 00:16:34.961 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:34.961 "is_configured": false, 00:16:34.961 "data_offset": 0, 00:16:34.961 "data_size": 65536 00:16:34.961 }, 00:16:34.961 { 00:16:34.961 "name": null, 00:16:34.961 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:34.961 "is_configured": false, 00:16:34.961 "data_offset": 0, 00:16:34.961 "data_size": 65536 00:16:34.961 }, 00:16:34.961 { 00:16:34.961 "name": "BaseBdev3", 00:16:34.961 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:34.961 "is_configured": true, 00:16:34.961 "data_offset": 0, 00:16:34.961 "data_size": 65536 00:16:34.961 } 00:16:34.961 ] 00:16:34.961 }' 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.961 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.530 09:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.530 [2024-10-11 09:50:19.999535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.530 09:50:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.530 "name": "Existed_Raid", 00:16:35.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.530 "strip_size_kb": 64, 00:16:35.530 "state": "configuring", 00:16:35.530 "raid_level": "raid5f", 00:16:35.530 "superblock": false, 00:16:35.530 "num_base_bdevs": 3, 00:16:35.530 "num_base_bdevs_discovered": 2, 00:16:35.530 "num_base_bdevs_operational": 3, 00:16:35.530 "base_bdevs_list": [ 00:16:35.530 { 00:16:35.530 "name": null, 00:16:35.530 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:35.530 "is_configured": false, 00:16:35.530 "data_offset": 0, 00:16:35.530 "data_size": 65536 00:16:35.530 }, 00:16:35.530 { 00:16:35.530 "name": "BaseBdev2", 00:16:35.530 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:35.530 "is_configured": true, 00:16:35.530 "data_offset": 0, 00:16:35.530 "data_size": 65536 00:16:35.530 }, 00:16:35.530 { 00:16:35.530 "name": "BaseBdev3", 00:16:35.530 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:35.530 "is_configured": true, 00:16:35.530 "data_offset": 0, 00:16:35.530 "data_size": 65536 00:16:35.530 } 00:16:35.530 ] 00:16:35.530 }' 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.530 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:36.098 
09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ddf3cb9e-b583-4ab6-af5b-88910a43748e 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.098 [2024-10-11 09:50:20.595043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:36.098 [2024-10-11 09:50:20.595202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:36.098 [2024-10-11 09:50:20.595252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:36.098 [2024-10-11 09:50:20.595588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:16:36.098 [2024-10-11 09:50:20.602563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:36.098 [2024-10-11 09:50:20.602632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:36.098 [2024-10-11 09:50:20.603041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.098 NewBaseBdev 00:16:36.098 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.099 09:50:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.099 [ 00:16:36.099 { 00:16:36.099 "name": "NewBaseBdev", 00:16:36.099 "aliases": [ 00:16:36.099 "ddf3cb9e-b583-4ab6-af5b-88910a43748e" 00:16:36.099 ], 00:16:36.099 "product_name": "Malloc disk", 00:16:36.099 "block_size": 512, 00:16:36.099 "num_blocks": 65536, 00:16:36.099 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:36.099 "assigned_rate_limits": { 00:16:36.099 "rw_ios_per_sec": 0, 00:16:36.099 "rw_mbytes_per_sec": 0, 00:16:36.099 "r_mbytes_per_sec": 0, 00:16:36.099 "w_mbytes_per_sec": 0 00:16:36.099 }, 00:16:36.099 "claimed": true, 00:16:36.099 "claim_type": "exclusive_write", 00:16:36.099 "zoned": false, 00:16:36.099 "supported_io_types": { 00:16:36.099 "read": true, 00:16:36.099 "write": true, 00:16:36.099 "unmap": true, 00:16:36.099 "flush": true, 00:16:36.099 "reset": true, 00:16:36.099 "nvme_admin": false, 00:16:36.099 "nvme_io": false, 00:16:36.099 "nvme_io_md": false, 00:16:36.099 "write_zeroes": true, 00:16:36.099 "zcopy": true, 00:16:36.099 "get_zone_info": false, 00:16:36.099 "zone_management": false, 00:16:36.099 "zone_append": false, 00:16:36.099 "compare": false, 00:16:36.099 "compare_and_write": false, 00:16:36.099 "abort": true, 00:16:36.099 "seek_hole": false, 00:16:36.099 "seek_data": false, 00:16:36.099 "copy": true, 00:16:36.099 "nvme_iov_md": false 00:16:36.099 }, 00:16:36.099 "memory_domains": [ 00:16:36.099 { 00:16:36.099 "dma_device_id": "system", 00:16:36.099 "dma_device_type": 1 00:16:36.099 }, 00:16:36.099 { 00:16:36.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.099 "dma_device_type": 2 00:16:36.099 } 00:16:36.099 ], 00:16:36.099 "driver_specific": {} 00:16:36.099 } 00:16:36.099 ] 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:36.099 09:50:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.099 "name": "Existed_Raid", 00:16:36.099 "uuid": "92d79631-9b13-463b-8638-996d039b6353", 00:16:36.099 "strip_size_kb": 64, 00:16:36.099 "state": "online", 
00:16:36.099 "raid_level": "raid5f", 00:16:36.099 "superblock": false, 00:16:36.099 "num_base_bdevs": 3, 00:16:36.099 "num_base_bdevs_discovered": 3, 00:16:36.099 "num_base_bdevs_operational": 3, 00:16:36.099 "base_bdevs_list": [ 00:16:36.099 { 00:16:36.099 "name": "NewBaseBdev", 00:16:36.099 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:36.099 "is_configured": true, 00:16:36.099 "data_offset": 0, 00:16:36.099 "data_size": 65536 00:16:36.099 }, 00:16:36.099 { 00:16:36.099 "name": "BaseBdev2", 00:16:36.099 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:36.099 "is_configured": true, 00:16:36.099 "data_offset": 0, 00:16:36.099 "data_size": 65536 00:16:36.099 }, 00:16:36.099 { 00:16:36.099 "name": "BaseBdev3", 00:16:36.099 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:36.099 "is_configured": true, 00:16:36.099 "data_offset": 0, 00:16:36.099 "data_size": 65536 00:16:36.099 } 00:16:36.099 ] 00:16:36.099 }' 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.099 09:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.667 09:50:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.667 [2024-10-11 09:50:21.093387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.667 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.667 "name": "Existed_Raid", 00:16:36.667 "aliases": [ 00:16:36.667 "92d79631-9b13-463b-8638-996d039b6353" 00:16:36.667 ], 00:16:36.667 "product_name": "Raid Volume", 00:16:36.667 "block_size": 512, 00:16:36.667 "num_blocks": 131072, 00:16:36.667 "uuid": "92d79631-9b13-463b-8638-996d039b6353", 00:16:36.667 "assigned_rate_limits": { 00:16:36.667 "rw_ios_per_sec": 0, 00:16:36.667 "rw_mbytes_per_sec": 0, 00:16:36.667 "r_mbytes_per_sec": 0, 00:16:36.667 "w_mbytes_per_sec": 0 00:16:36.667 }, 00:16:36.668 "claimed": false, 00:16:36.668 "zoned": false, 00:16:36.668 "supported_io_types": { 00:16:36.668 "read": true, 00:16:36.668 "write": true, 00:16:36.668 "unmap": false, 00:16:36.668 "flush": false, 00:16:36.668 "reset": true, 00:16:36.668 "nvme_admin": false, 00:16:36.668 "nvme_io": false, 00:16:36.668 "nvme_io_md": false, 00:16:36.668 "write_zeroes": true, 00:16:36.668 "zcopy": false, 00:16:36.668 "get_zone_info": false, 00:16:36.668 "zone_management": false, 00:16:36.668 "zone_append": false, 00:16:36.668 "compare": false, 00:16:36.668 "compare_and_write": false, 00:16:36.668 "abort": false, 00:16:36.668 "seek_hole": false, 00:16:36.668 "seek_data": false, 00:16:36.668 "copy": false, 00:16:36.668 "nvme_iov_md": false 00:16:36.668 }, 00:16:36.668 "driver_specific": { 00:16:36.668 "raid": { 00:16:36.668 "uuid": 
"92d79631-9b13-463b-8638-996d039b6353", 00:16:36.668 "strip_size_kb": 64, 00:16:36.668 "state": "online", 00:16:36.668 "raid_level": "raid5f", 00:16:36.668 "superblock": false, 00:16:36.668 "num_base_bdevs": 3, 00:16:36.668 "num_base_bdevs_discovered": 3, 00:16:36.668 "num_base_bdevs_operational": 3, 00:16:36.668 "base_bdevs_list": [ 00:16:36.668 { 00:16:36.668 "name": "NewBaseBdev", 00:16:36.668 "uuid": "ddf3cb9e-b583-4ab6-af5b-88910a43748e", 00:16:36.668 "is_configured": true, 00:16:36.668 "data_offset": 0, 00:16:36.668 "data_size": 65536 00:16:36.668 }, 00:16:36.668 { 00:16:36.668 "name": "BaseBdev2", 00:16:36.668 "uuid": "1bcc7d4c-4245-49a4-aa4a-5a148a62e2d0", 00:16:36.668 "is_configured": true, 00:16:36.668 "data_offset": 0, 00:16:36.668 "data_size": 65536 00:16:36.668 }, 00:16:36.668 { 00:16:36.668 "name": "BaseBdev3", 00:16:36.668 "uuid": "6b92a772-dae9-4d76-8bff-5cd08d0b5f30", 00:16:36.668 "is_configured": true, 00:16:36.668 "data_offset": 0, 00:16:36.668 "data_size": 65536 00:16:36.668 } 00:16:36.668 ] 00:16:36.668 } 00:16:36.668 } 00:16:36.668 }' 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:36.668 BaseBdev2 00:16:36.668 BaseBdev3' 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.668 09:50:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.668 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.928 [2024-10-11 09:50:21.384662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.928 [2024-10-11 09:50:21.384756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.928 [2024-10-11 09:50:21.384885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.928 [2024-10-11 09:50:21.385245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.928 [2024-10-11 09:50:21.385308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80465 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80465 ']' 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 80465 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80465 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80465' 00:16:36.928 killing process with pid 80465 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80465 00:16:36.928 09:50:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80465 00:16:36.928 [2024-10-11 09:50:21.429970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.188 [2024-10-11 09:50:21.776307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.568 ************************************ 00:16:38.568 END TEST raid5f_state_function_test 00:16:38.568 ************************************ 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:38.568 00:16:38.568 real 0m10.686s 00:16:38.568 user 0m16.841s 00:16:38.568 sys 0m1.788s 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.568 09:50:23 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:38.568 09:50:23 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:38.568 09:50:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.568 09:50:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.568 ************************************ 00:16:38.568 START TEST raid5f_state_function_test_sb 00:16:38.568 ************************************ 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:38.568 09:50:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:38.568 Process raid pid: 81085 00:16:38.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81085 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81085' 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81085 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81085 ']' 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.568 09:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:38.828 [2024-10-11 09:50:23.217665] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:16:38.828 [2024-10-11 09:50:23.217866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.828 [2024-10-11 09:50:23.383078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.088 [2024-10-11 09:50:23.508043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.391 [2024-10-11 09:50:23.728564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.391 [2024-10-11 09:50:23.728615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.649 [2024-10-11 09:50:24.058716] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.649 [2024-10-11 09:50:24.058879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.649 [2024-10-11 09:50:24.058914] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.649 [2024-10-11 09:50:24.058939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.649 [2024-10-11 09:50:24.058958] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:39.649 [2024-10-11 09:50:24.058979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.649 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.650 09:50:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.650 "name": "Existed_Raid", 00:16:39.650 "uuid": "261e6680-94ec-4a55-a406-e818d1fbff61", 00:16:39.650 "strip_size_kb": 64, 00:16:39.650 "state": "configuring", 00:16:39.650 "raid_level": "raid5f", 00:16:39.650 "superblock": true, 00:16:39.650 "num_base_bdevs": 3, 00:16:39.650 "num_base_bdevs_discovered": 0, 00:16:39.650 "num_base_bdevs_operational": 3, 00:16:39.650 "base_bdevs_list": [ 00:16:39.650 { 00:16:39.650 "name": "BaseBdev1", 00:16:39.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.650 "is_configured": false, 00:16:39.650 "data_offset": 0, 00:16:39.650 "data_size": 0 00:16:39.650 }, 00:16:39.650 { 00:16:39.650 "name": "BaseBdev2", 00:16:39.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.650 "is_configured": false, 00:16:39.650 "data_offset": 0, 00:16:39.650 "data_size": 0 00:16:39.650 }, 00:16:39.650 { 00:16:39.650 "name": "BaseBdev3", 00:16:39.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.650 "is_configured": false, 00:16:39.650 "data_offset": 0, 00:16:39.650 "data_size": 0 00:16:39.650 } 00:16:39.650 ] 00:16:39.650 }' 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.650 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.908 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:39.908 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.908 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.908 [2024-10-11 09:50:24.469911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.908 
[2024-10-11 09:50:24.470006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:39.908 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.909 [2024-10-11 09:50:24.477921] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.909 [2024-10-11 09:50:24.478017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.909 [2024-10-11 09:50:24.478045] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.909 [2024-10-11 09:50:24.478069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.909 [2024-10-11 09:50:24.478087] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.909 [2024-10-11 09:50:24.478108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.909 [2024-10-11 09:50:24.525714] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.909 BaseBdev1 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.909 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.168 [ 00:16:40.168 { 00:16:40.168 "name": "BaseBdev1", 00:16:40.168 "aliases": [ 00:16:40.168 "6e567f02-05d9-43ed-ba49-6e131655c1c7" 00:16:40.168 ], 00:16:40.168 "product_name": "Malloc disk", 00:16:40.168 "block_size": 512, 00:16:40.168 
"num_blocks": 65536, 00:16:40.168 "uuid": "6e567f02-05d9-43ed-ba49-6e131655c1c7", 00:16:40.168 "assigned_rate_limits": { 00:16:40.168 "rw_ios_per_sec": 0, 00:16:40.168 "rw_mbytes_per_sec": 0, 00:16:40.168 "r_mbytes_per_sec": 0, 00:16:40.168 "w_mbytes_per_sec": 0 00:16:40.168 }, 00:16:40.168 "claimed": true, 00:16:40.168 "claim_type": "exclusive_write", 00:16:40.168 "zoned": false, 00:16:40.168 "supported_io_types": { 00:16:40.168 "read": true, 00:16:40.168 "write": true, 00:16:40.168 "unmap": true, 00:16:40.168 "flush": true, 00:16:40.168 "reset": true, 00:16:40.168 "nvme_admin": false, 00:16:40.168 "nvme_io": false, 00:16:40.168 "nvme_io_md": false, 00:16:40.168 "write_zeroes": true, 00:16:40.168 "zcopy": true, 00:16:40.168 "get_zone_info": false, 00:16:40.168 "zone_management": false, 00:16:40.168 "zone_append": false, 00:16:40.168 "compare": false, 00:16:40.168 "compare_and_write": false, 00:16:40.168 "abort": true, 00:16:40.168 "seek_hole": false, 00:16:40.168 "seek_data": false, 00:16:40.168 "copy": true, 00:16:40.168 "nvme_iov_md": false 00:16:40.168 }, 00:16:40.168 "memory_domains": [ 00:16:40.168 { 00:16:40.168 "dma_device_id": "system", 00:16:40.168 "dma_device_type": 1 00:16:40.168 }, 00:16:40.168 { 00:16:40.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.168 "dma_device_type": 2 00:16:40.168 } 00:16:40.168 ], 00:16:40.168 "driver_specific": {} 00:16:40.168 } 00:16:40.168 ] 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.168 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.169 "name": "Existed_Raid", 00:16:40.169 "uuid": "04021c14-8bc0-4caa-b50c-1b0a1aeb808d", 00:16:40.169 "strip_size_kb": 64, 00:16:40.169 "state": "configuring", 00:16:40.169 "raid_level": "raid5f", 00:16:40.169 "superblock": true, 00:16:40.169 "num_base_bdevs": 3, 00:16:40.169 "num_base_bdevs_discovered": 1, 00:16:40.169 "num_base_bdevs_operational": 3, 00:16:40.169 "base_bdevs_list": [ 00:16:40.169 { 00:16:40.169 
"name": "BaseBdev1", 00:16:40.169 "uuid": "6e567f02-05d9-43ed-ba49-6e131655c1c7", 00:16:40.169 "is_configured": true, 00:16:40.169 "data_offset": 2048, 00:16:40.169 "data_size": 63488 00:16:40.169 }, 00:16:40.169 { 00:16:40.169 "name": "BaseBdev2", 00:16:40.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.169 "is_configured": false, 00:16:40.169 "data_offset": 0, 00:16:40.169 "data_size": 0 00:16:40.169 }, 00:16:40.169 { 00:16:40.169 "name": "BaseBdev3", 00:16:40.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.169 "is_configured": false, 00:16:40.169 "data_offset": 0, 00:16:40.169 "data_size": 0 00:16:40.169 } 00:16:40.169 ] 00:16:40.169 }' 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.169 09:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.427 [2024-10-11 09:50:25.044899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.427 [2024-10-11 09:50:25.045014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:40.427 [2024-10-11 09:50:25.052955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.427 [2024-10-11 09:50:25.055052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.427 [2024-10-11 09:50:25.055129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.427 [2024-10-11 09:50:25.055165] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.427 [2024-10-11 09:50:25.055188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.427 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.686 "name": "Existed_Raid", 00:16:40.686 "uuid": "d97329ea-6cec-43a9-93d7-08f7a832f18e", 00:16:40.686 "strip_size_kb": 64, 00:16:40.686 "state": "configuring", 00:16:40.686 "raid_level": "raid5f", 00:16:40.686 "superblock": true, 00:16:40.686 "num_base_bdevs": 3, 00:16:40.686 "num_base_bdevs_discovered": 1, 00:16:40.686 "num_base_bdevs_operational": 3, 00:16:40.686 "base_bdevs_list": [ 00:16:40.686 { 00:16:40.686 "name": "BaseBdev1", 00:16:40.686 "uuid": "6e567f02-05d9-43ed-ba49-6e131655c1c7", 00:16:40.686 "is_configured": true, 00:16:40.686 "data_offset": 2048, 00:16:40.686 "data_size": 63488 00:16:40.686 }, 00:16:40.686 { 00:16:40.686 "name": "BaseBdev2", 00:16:40.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.686 "is_configured": false, 00:16:40.686 "data_offset": 0, 00:16:40.686 "data_size": 0 00:16:40.686 }, 00:16:40.686 { 00:16:40.686 "name": "BaseBdev3", 00:16:40.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.686 "is_configured": false, 00:16:40.686 "data_offset": 0, 00:16:40.686 "data_size": 
0 00:16:40.686 } 00:16:40.686 ] 00:16:40.686 }' 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.686 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.945 [2024-10-11 09:50:25.490455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.945 BaseBdev2 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.945 [ 00:16:40.945 { 00:16:40.945 "name": "BaseBdev2", 00:16:40.945 "aliases": [ 00:16:40.945 "65fe7dd2-1446-4e16-9426-81715113772d" 00:16:40.945 ], 00:16:40.945 "product_name": "Malloc disk", 00:16:40.945 "block_size": 512, 00:16:40.945 "num_blocks": 65536, 00:16:40.945 "uuid": "65fe7dd2-1446-4e16-9426-81715113772d", 00:16:40.945 "assigned_rate_limits": { 00:16:40.945 "rw_ios_per_sec": 0, 00:16:40.945 "rw_mbytes_per_sec": 0, 00:16:40.945 "r_mbytes_per_sec": 0, 00:16:40.945 "w_mbytes_per_sec": 0 00:16:40.945 }, 00:16:40.945 "claimed": true, 00:16:40.945 "claim_type": "exclusive_write", 00:16:40.945 "zoned": false, 00:16:40.945 "supported_io_types": { 00:16:40.945 "read": true, 00:16:40.945 "write": true, 00:16:40.945 "unmap": true, 00:16:40.945 "flush": true, 00:16:40.945 "reset": true, 00:16:40.945 "nvme_admin": false, 00:16:40.945 "nvme_io": false, 00:16:40.945 "nvme_io_md": false, 00:16:40.945 "write_zeroes": true, 00:16:40.945 "zcopy": true, 00:16:40.945 "get_zone_info": false, 00:16:40.945 "zone_management": false, 00:16:40.945 "zone_append": false, 00:16:40.945 "compare": false, 00:16:40.945 "compare_and_write": false, 00:16:40.945 "abort": true, 00:16:40.945 "seek_hole": false, 00:16:40.945 "seek_data": false, 00:16:40.945 "copy": true, 00:16:40.945 "nvme_iov_md": false 00:16:40.945 }, 00:16:40.945 "memory_domains": [ 00:16:40.945 { 00:16:40.945 "dma_device_id": "system", 00:16:40.945 "dma_device_type": 1 00:16:40.945 }, 00:16:40.945 { 00:16:40.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.945 "dma_device_type": 2 00:16:40.945 } 
00:16:40.945 ], 00:16:40.945 "driver_specific": {} 00:16:40.945 } 00:16:40.945 ] 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:40.945 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.946 "name": "Existed_Raid", 00:16:40.946 "uuid": "d97329ea-6cec-43a9-93d7-08f7a832f18e", 00:16:40.946 "strip_size_kb": 64, 00:16:40.946 "state": "configuring", 00:16:40.946 "raid_level": "raid5f", 00:16:40.946 "superblock": true, 00:16:40.946 "num_base_bdevs": 3, 00:16:40.946 "num_base_bdevs_discovered": 2, 00:16:40.946 "num_base_bdevs_operational": 3, 00:16:40.946 "base_bdevs_list": [ 00:16:40.946 { 00:16:40.946 "name": "BaseBdev1", 00:16:40.946 "uuid": "6e567f02-05d9-43ed-ba49-6e131655c1c7", 00:16:40.946 "is_configured": true, 00:16:40.946 "data_offset": 2048, 00:16:40.946 "data_size": 63488 00:16:40.946 }, 00:16:40.946 { 00:16:40.946 "name": "BaseBdev2", 00:16:40.946 "uuid": "65fe7dd2-1446-4e16-9426-81715113772d", 00:16:40.946 "is_configured": true, 00:16:40.946 "data_offset": 2048, 00:16:40.946 "data_size": 63488 00:16:40.946 }, 00:16:40.946 { 00:16:40.946 "name": "BaseBdev3", 00:16:40.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.946 "is_configured": false, 00:16:40.946 "data_offset": 0, 00:16:40.946 "data_size": 0 00:16:40.946 } 00:16:40.946 ] 00:16:40.946 }' 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.946 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.514 09:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:41.514 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:16:41.514 09:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.514 [2024-10-11 09:50:26.013067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.514 [2024-10-11 09:50:26.013456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:41.514 [2024-10-11 09:50:26.013528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:41.514 BaseBdev3 00:16:41.514 [2024-10-11 09:50:26.014026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.514 [2024-10-11 09:50:26.021715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:41.514 [2024-10-11 09:50:26.021794] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:41.514 [2024-10-11 09:50:26.022155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.514 [ 00:16:41.514 { 00:16:41.514 "name": "BaseBdev3", 00:16:41.514 "aliases": [ 00:16:41.514 "14c58714-9020-4811-8ab2-a61c6d8ba355" 00:16:41.514 ], 00:16:41.514 "product_name": "Malloc disk", 00:16:41.514 "block_size": 512, 00:16:41.514 "num_blocks": 65536, 00:16:41.514 "uuid": "14c58714-9020-4811-8ab2-a61c6d8ba355", 00:16:41.514 "assigned_rate_limits": { 00:16:41.514 "rw_ios_per_sec": 0, 00:16:41.514 "rw_mbytes_per_sec": 0, 00:16:41.514 "r_mbytes_per_sec": 0, 00:16:41.514 "w_mbytes_per_sec": 0 00:16:41.514 }, 00:16:41.514 "claimed": true, 00:16:41.514 "claim_type": "exclusive_write", 00:16:41.514 "zoned": false, 00:16:41.514 "supported_io_types": { 00:16:41.514 "read": true, 00:16:41.514 "write": true, 00:16:41.514 "unmap": true, 00:16:41.514 "flush": true, 00:16:41.514 "reset": true, 00:16:41.514 "nvme_admin": false, 00:16:41.514 "nvme_io": false, 00:16:41.514 "nvme_io_md": false, 00:16:41.514 "write_zeroes": true, 00:16:41.514 "zcopy": true, 00:16:41.514 "get_zone_info": false, 00:16:41.514 "zone_management": false, 00:16:41.514 "zone_append": false, 00:16:41.514 "compare": false, 00:16:41.514 "compare_and_write": false, 00:16:41.514 "abort": true, 00:16:41.514 "seek_hole": false, 00:16:41.514 "seek_data": false, 00:16:41.514 "copy": true, 00:16:41.514 
"nvme_iov_md": false 00:16:41.514 }, 00:16:41.514 "memory_domains": [ 00:16:41.514 { 00:16:41.514 "dma_device_id": "system", 00:16:41.514 "dma_device_type": 1 00:16:41.514 }, 00:16:41.514 { 00:16:41.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.514 "dma_device_type": 2 00:16:41.514 } 00:16:41.514 ], 00:16:41.514 "driver_specific": {} 00:16:41.514 } 00:16:41.514 ] 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.514 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.514 "name": "Existed_Raid", 00:16:41.514 "uuid": "d97329ea-6cec-43a9-93d7-08f7a832f18e", 00:16:41.514 "strip_size_kb": 64, 00:16:41.514 "state": "online", 00:16:41.514 "raid_level": "raid5f", 00:16:41.514 "superblock": true, 00:16:41.514 "num_base_bdevs": 3, 00:16:41.515 "num_base_bdevs_discovered": 3, 00:16:41.515 "num_base_bdevs_operational": 3, 00:16:41.515 "base_bdevs_list": [ 00:16:41.515 { 00:16:41.515 "name": "BaseBdev1", 00:16:41.515 "uuid": "6e567f02-05d9-43ed-ba49-6e131655c1c7", 00:16:41.515 "is_configured": true, 00:16:41.515 "data_offset": 2048, 00:16:41.515 "data_size": 63488 00:16:41.515 }, 00:16:41.515 { 00:16:41.515 "name": "BaseBdev2", 00:16:41.515 "uuid": "65fe7dd2-1446-4e16-9426-81715113772d", 00:16:41.515 "is_configured": true, 00:16:41.515 "data_offset": 2048, 00:16:41.515 "data_size": 63488 00:16:41.515 }, 00:16:41.515 { 00:16:41.515 "name": "BaseBdev3", 00:16:41.515 "uuid": "14c58714-9020-4811-8ab2-a61c6d8ba355", 00:16:41.515 "is_configured": true, 00:16:41.515 "data_offset": 2048, 00:16:41.515 "data_size": 63488 00:16:41.515 } 00:16:41.515 ] 00:16:41.515 }' 00:16:41.515 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.515 09:50:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.081 [2024-10-11 09:50:26.508133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.081 "name": "Existed_Raid", 00:16:42.081 "aliases": [ 00:16:42.081 "d97329ea-6cec-43a9-93d7-08f7a832f18e" 00:16:42.081 ], 00:16:42.081 "product_name": "Raid Volume", 00:16:42.081 "block_size": 512, 00:16:42.081 "num_blocks": 126976, 00:16:42.081 "uuid": "d97329ea-6cec-43a9-93d7-08f7a832f18e", 00:16:42.081 "assigned_rate_limits": { 00:16:42.081 "rw_ios_per_sec": 0, 00:16:42.081 
"rw_mbytes_per_sec": 0, 00:16:42.081 "r_mbytes_per_sec": 0, 00:16:42.081 "w_mbytes_per_sec": 0 00:16:42.081 }, 00:16:42.081 "claimed": false, 00:16:42.081 "zoned": false, 00:16:42.081 "supported_io_types": { 00:16:42.081 "read": true, 00:16:42.081 "write": true, 00:16:42.081 "unmap": false, 00:16:42.081 "flush": false, 00:16:42.081 "reset": true, 00:16:42.081 "nvme_admin": false, 00:16:42.081 "nvme_io": false, 00:16:42.081 "nvme_io_md": false, 00:16:42.081 "write_zeroes": true, 00:16:42.081 "zcopy": false, 00:16:42.081 "get_zone_info": false, 00:16:42.081 "zone_management": false, 00:16:42.081 "zone_append": false, 00:16:42.081 "compare": false, 00:16:42.081 "compare_and_write": false, 00:16:42.081 "abort": false, 00:16:42.081 "seek_hole": false, 00:16:42.081 "seek_data": false, 00:16:42.081 "copy": false, 00:16:42.081 "nvme_iov_md": false 00:16:42.081 }, 00:16:42.081 "driver_specific": { 00:16:42.081 "raid": { 00:16:42.081 "uuid": "d97329ea-6cec-43a9-93d7-08f7a832f18e", 00:16:42.081 "strip_size_kb": 64, 00:16:42.081 "state": "online", 00:16:42.081 "raid_level": "raid5f", 00:16:42.081 "superblock": true, 00:16:42.081 "num_base_bdevs": 3, 00:16:42.081 "num_base_bdevs_discovered": 3, 00:16:42.081 "num_base_bdevs_operational": 3, 00:16:42.081 "base_bdevs_list": [ 00:16:42.081 { 00:16:42.081 "name": "BaseBdev1", 00:16:42.081 "uuid": "6e567f02-05d9-43ed-ba49-6e131655c1c7", 00:16:42.081 "is_configured": true, 00:16:42.081 "data_offset": 2048, 00:16:42.081 "data_size": 63488 00:16:42.081 }, 00:16:42.081 { 00:16:42.081 "name": "BaseBdev2", 00:16:42.081 "uuid": "65fe7dd2-1446-4e16-9426-81715113772d", 00:16:42.081 "is_configured": true, 00:16:42.081 "data_offset": 2048, 00:16:42.081 "data_size": 63488 00:16:42.081 }, 00:16:42.081 { 00:16:42.081 "name": "BaseBdev3", 00:16:42.081 "uuid": "14c58714-9020-4811-8ab2-a61c6d8ba355", 00:16:42.081 "is_configured": true, 00:16:42.081 "data_offset": 2048, 00:16:42.081 "data_size": 63488 00:16:42.081 } 00:16:42.081 ] 00:16:42.081 } 
00:16:42.081 } 00:16:42.081 }' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:42.081 BaseBdev2 00:16:42.081 BaseBdev3' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.081 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.340 [2024-10-11 09:50:26.787446] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.340 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.341 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.341 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.341 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.341 "name": "Existed_Raid", 00:16:42.341 "uuid": "d97329ea-6cec-43a9-93d7-08f7a832f18e", 00:16:42.341 "strip_size_kb": 64, 00:16:42.341 "state": "online", 00:16:42.341 "raid_level": "raid5f", 00:16:42.341 "superblock": true, 00:16:42.341 "num_base_bdevs": 3, 00:16:42.341 "num_base_bdevs_discovered": 2, 00:16:42.341 "num_base_bdevs_operational": 2, 00:16:42.341 "base_bdevs_list": [ 00:16:42.341 { 00:16:42.341 "name": null, 00:16:42.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.341 "is_configured": false, 00:16:42.341 "data_offset": 0, 00:16:42.341 "data_size": 63488 00:16:42.341 }, 00:16:42.341 { 00:16:42.341 "name": "BaseBdev2", 00:16:42.341 "uuid": "65fe7dd2-1446-4e16-9426-81715113772d", 00:16:42.341 "is_configured": true, 00:16:42.341 "data_offset": 2048, 00:16:42.341 "data_size": 63488 00:16:42.341 }, 00:16:42.341 { 00:16:42.341 "name": "BaseBdev3", 00:16:42.341 "uuid": "14c58714-9020-4811-8ab2-a61c6d8ba355", 00:16:42.341 "is_configured": true, 00:16:42.341 "data_offset": 2048, 00:16:42.341 "data_size": 63488 00:16:42.341 } 00:16:42.341 ] 00:16:42.341 }' 00:16:42.341 09:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.341 09:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.908 09:50:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:42.908 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:42.908 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.909 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.909 [2024-10-11 09:50:27.457062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:42.909 [2024-10-11 09:50:27.457290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.168 [2024-10-11 09:50:27.562602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.168 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.168 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:43.168 09:50:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:43.168 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.168 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:43.168 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.168 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.169 [2024-10-11 09:50:27.618564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:43.169 [2024-10-11 09:50:27.618669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.169 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.428 BaseBdev2 00:16:43.428 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.428 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:43.428 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:43.428 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:43.428 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.429 [ 00:16:43.429 { 00:16:43.429 "name": "BaseBdev2", 00:16:43.429 "aliases": [ 00:16:43.429 "20d6e73a-b45f-40bd-afd7-8311714a7514" 00:16:43.429 ], 00:16:43.429 "product_name": "Malloc disk", 00:16:43.429 "block_size": 512, 00:16:43.429 "num_blocks": 65536, 00:16:43.429 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:43.429 "assigned_rate_limits": { 00:16:43.429 "rw_ios_per_sec": 0, 00:16:43.429 "rw_mbytes_per_sec": 0, 00:16:43.429 "r_mbytes_per_sec": 0, 00:16:43.429 "w_mbytes_per_sec": 0 00:16:43.429 }, 00:16:43.429 "claimed": false, 00:16:43.429 "zoned": false, 00:16:43.429 "supported_io_types": { 00:16:43.429 "read": true, 00:16:43.429 "write": true, 00:16:43.429 "unmap": true, 00:16:43.429 "flush": true, 00:16:43.429 "reset": true, 00:16:43.429 "nvme_admin": false, 00:16:43.429 "nvme_io": false, 00:16:43.429 "nvme_io_md": false, 00:16:43.429 "write_zeroes": true, 00:16:43.429 "zcopy": true, 00:16:43.429 "get_zone_info": false, 00:16:43.429 "zone_management": false, 00:16:43.429 "zone_append": false, 
00:16:43.429 "compare": false, 00:16:43.429 "compare_and_write": false, 00:16:43.429 "abort": true, 00:16:43.429 "seek_hole": false, 00:16:43.429 "seek_data": false, 00:16:43.429 "copy": true, 00:16:43.429 "nvme_iov_md": false 00:16:43.429 }, 00:16:43.429 "memory_domains": [ 00:16:43.429 { 00:16:43.429 "dma_device_id": "system", 00:16:43.429 "dma_device_type": 1 00:16:43.429 }, 00:16:43.429 { 00:16:43.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.429 "dma_device_type": 2 00:16:43.429 } 00:16:43.429 ], 00:16:43.429 "driver_specific": {} 00:16:43.429 } 00:16:43.429 ] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.429 BaseBdev3 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:43.429 
09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.429 [ 00:16:43.429 { 00:16:43.429 "name": "BaseBdev3", 00:16:43.429 "aliases": [ 00:16:43.429 "5d7bb275-70d2-49bd-9e00-829385c465c6" 00:16:43.429 ], 00:16:43.429 "product_name": "Malloc disk", 00:16:43.429 "block_size": 512, 00:16:43.429 "num_blocks": 65536, 00:16:43.429 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:43.429 "assigned_rate_limits": { 00:16:43.429 "rw_ios_per_sec": 0, 00:16:43.429 "rw_mbytes_per_sec": 0, 00:16:43.429 "r_mbytes_per_sec": 0, 00:16:43.429 "w_mbytes_per_sec": 0 00:16:43.429 }, 00:16:43.429 "claimed": false, 00:16:43.429 "zoned": false, 00:16:43.429 "supported_io_types": { 00:16:43.429 "read": true, 00:16:43.429 "write": true, 00:16:43.429 "unmap": true, 00:16:43.429 "flush": true, 00:16:43.429 "reset": true, 00:16:43.429 "nvme_admin": false, 00:16:43.429 "nvme_io": false, 00:16:43.429 "nvme_io_md": false, 00:16:43.429 "write_zeroes": true, 00:16:43.429 "zcopy": true, 00:16:43.429 "get_zone_info": 
false, 00:16:43.429 "zone_management": false, 00:16:43.429 "zone_append": false, 00:16:43.429 "compare": false, 00:16:43.429 "compare_and_write": false, 00:16:43.429 "abort": true, 00:16:43.429 "seek_hole": false, 00:16:43.429 "seek_data": false, 00:16:43.429 "copy": true, 00:16:43.429 "nvme_iov_md": false 00:16:43.429 }, 00:16:43.429 "memory_domains": [ 00:16:43.429 { 00:16:43.429 "dma_device_id": "system", 00:16:43.429 "dma_device_type": 1 00:16:43.429 }, 00:16:43.429 { 00:16:43.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.429 "dma_device_type": 2 00:16:43.429 } 00:16:43.429 ], 00:16:43.429 "driver_specific": {} 00:16:43.429 } 00:16:43.429 ] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.429 [2024-10-11 09:50:27.921153] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.429 [2024-10-11 09:50:27.921269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.429 [2024-10-11 09:50:27.921322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.429 [2024-10-11 09:50:27.923418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.429 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.429 09:50:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.429 "name": "Existed_Raid", 00:16:43.429 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:43.429 "strip_size_kb": 64, 00:16:43.429 "state": "configuring", 00:16:43.429 "raid_level": "raid5f", 00:16:43.429 "superblock": true, 00:16:43.429 "num_base_bdevs": 3, 00:16:43.429 "num_base_bdevs_discovered": 2, 00:16:43.429 "num_base_bdevs_operational": 3, 00:16:43.429 "base_bdevs_list": [ 00:16:43.429 { 00:16:43.429 "name": "BaseBdev1", 00:16:43.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.429 "is_configured": false, 00:16:43.429 "data_offset": 0, 00:16:43.429 "data_size": 0 00:16:43.429 }, 00:16:43.429 { 00:16:43.429 "name": "BaseBdev2", 00:16:43.429 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:43.429 "is_configured": true, 00:16:43.429 "data_offset": 2048, 00:16:43.429 "data_size": 63488 00:16:43.429 }, 00:16:43.429 { 00:16:43.429 "name": "BaseBdev3", 00:16:43.429 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:43.429 "is_configured": true, 00:16:43.429 "data_offset": 2048, 00:16:43.429 "data_size": 63488 00:16:43.430 } 00:16:43.430 ] 00:16:43.430 }' 00:16:43.430 09:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.430 09:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.049 [2024-10-11 09:50:28.408348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.049 
09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.049 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.049 "name": "Existed_Raid", 00:16:44.049 "uuid": 
"2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:44.049 "strip_size_kb": 64, 00:16:44.049 "state": "configuring", 00:16:44.050 "raid_level": "raid5f", 00:16:44.050 "superblock": true, 00:16:44.050 "num_base_bdevs": 3, 00:16:44.050 "num_base_bdevs_discovered": 1, 00:16:44.050 "num_base_bdevs_operational": 3, 00:16:44.050 "base_bdevs_list": [ 00:16:44.050 { 00:16:44.050 "name": "BaseBdev1", 00:16:44.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.050 "is_configured": false, 00:16:44.050 "data_offset": 0, 00:16:44.050 "data_size": 0 00:16:44.050 }, 00:16:44.050 { 00:16:44.050 "name": null, 00:16:44.050 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:44.050 "is_configured": false, 00:16:44.050 "data_offset": 0, 00:16:44.050 "data_size": 63488 00:16:44.050 }, 00:16:44.050 { 00:16:44.050 "name": "BaseBdev3", 00:16:44.050 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:44.050 "is_configured": true, 00:16:44.050 "data_offset": 2048, 00:16:44.050 "data_size": 63488 00:16:44.050 } 00:16:44.050 ] 00:16:44.050 }' 00:16:44.050 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.050 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:44.309 09:50:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.309 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 [2024-10-11 09:50:28.945910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.569 BaseBdev1 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 [ 00:16:44.569 { 00:16:44.569 "name": "BaseBdev1", 00:16:44.569 "aliases": [ 00:16:44.569 "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50" 00:16:44.569 ], 00:16:44.569 "product_name": "Malloc disk", 00:16:44.569 "block_size": 512, 00:16:44.569 "num_blocks": 65536, 00:16:44.569 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:44.569 "assigned_rate_limits": { 00:16:44.569 "rw_ios_per_sec": 0, 00:16:44.569 "rw_mbytes_per_sec": 0, 00:16:44.569 "r_mbytes_per_sec": 0, 00:16:44.569 "w_mbytes_per_sec": 0 00:16:44.569 }, 00:16:44.569 "claimed": true, 00:16:44.569 "claim_type": "exclusive_write", 00:16:44.569 "zoned": false, 00:16:44.569 "supported_io_types": { 00:16:44.569 "read": true, 00:16:44.569 "write": true, 00:16:44.569 "unmap": true, 00:16:44.569 "flush": true, 00:16:44.569 "reset": true, 00:16:44.569 "nvme_admin": false, 00:16:44.569 "nvme_io": false, 00:16:44.569 "nvme_io_md": false, 00:16:44.569 "write_zeroes": true, 00:16:44.569 "zcopy": true, 00:16:44.569 "get_zone_info": false, 00:16:44.569 "zone_management": false, 00:16:44.569 "zone_append": false, 00:16:44.569 "compare": false, 00:16:44.569 "compare_and_write": false, 00:16:44.569 "abort": true, 00:16:44.569 "seek_hole": false, 00:16:44.569 "seek_data": false, 00:16:44.569 "copy": true, 00:16:44.569 "nvme_iov_md": false 00:16:44.569 }, 00:16:44.569 "memory_domains": [ 00:16:44.569 { 00:16:44.569 "dma_device_id": "system", 00:16:44.569 "dma_device_type": 1 00:16:44.569 }, 00:16:44.569 { 00:16:44.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.569 "dma_device_type": 2 00:16:44.569 } 00:16:44.569 ], 00:16:44.569 "driver_specific": {} 00:16:44.569 } 00:16:44.569 ] 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 09:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.569 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.569 "name": "Existed_Raid", 00:16:44.569 "uuid": 
"2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:44.569 "strip_size_kb": 64, 00:16:44.569 "state": "configuring", 00:16:44.569 "raid_level": "raid5f", 00:16:44.569 "superblock": true, 00:16:44.569 "num_base_bdevs": 3, 00:16:44.569 "num_base_bdevs_discovered": 2, 00:16:44.569 "num_base_bdevs_operational": 3, 00:16:44.569 "base_bdevs_list": [ 00:16:44.569 { 00:16:44.569 "name": "BaseBdev1", 00:16:44.569 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:44.569 "is_configured": true, 00:16:44.569 "data_offset": 2048, 00:16:44.569 "data_size": 63488 00:16:44.569 }, 00:16:44.569 { 00:16:44.569 "name": null, 00:16:44.569 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:44.569 "is_configured": false, 00:16:44.569 "data_offset": 0, 00:16:44.569 "data_size": 63488 00:16:44.569 }, 00:16:44.569 { 00:16:44.569 "name": "BaseBdev3", 00:16:44.569 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:44.569 "is_configured": true, 00:16:44.569 "data_offset": 2048, 00:16:44.569 "data_size": 63488 00:16:44.569 } 00:16:44.569 ] 00:16:44.569 }' 00:16:44.569 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.569 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.828 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.828 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:44.828 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.828 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:45.087 09:50:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.087 [2024-10-11 09:50:29.481203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.087 "name": "Existed_Raid", 00:16:45.087 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:45.087 "strip_size_kb": 64, 00:16:45.087 "state": "configuring", 00:16:45.087 "raid_level": "raid5f", 00:16:45.087 "superblock": true, 00:16:45.087 "num_base_bdevs": 3, 00:16:45.087 "num_base_bdevs_discovered": 1, 00:16:45.087 "num_base_bdevs_operational": 3, 00:16:45.087 "base_bdevs_list": [ 00:16:45.087 { 00:16:45.087 "name": "BaseBdev1", 00:16:45.087 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:45.087 "is_configured": true, 00:16:45.087 "data_offset": 2048, 00:16:45.087 "data_size": 63488 00:16:45.087 }, 00:16:45.087 { 00:16:45.087 "name": null, 00:16:45.087 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:45.087 "is_configured": false, 00:16:45.087 "data_offset": 0, 00:16:45.087 "data_size": 63488 00:16:45.087 }, 00:16:45.087 { 00:16:45.087 "name": null, 00:16:45.087 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:45.087 "is_configured": false, 00:16:45.087 "data_offset": 0, 00:16:45.087 "data_size": 63488 00:16:45.087 } 00:16:45.087 ] 00:16:45.087 }' 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.087 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.347 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.347 [2024-10-11 09:50:29.976392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.607 09:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.607 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.607 "name": "Existed_Raid", 00:16:45.607 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:45.607 "strip_size_kb": 64, 00:16:45.607 "state": "configuring", 00:16:45.607 "raid_level": "raid5f", 00:16:45.607 "superblock": true, 00:16:45.607 "num_base_bdevs": 3, 00:16:45.607 "num_base_bdevs_discovered": 2, 00:16:45.607 "num_base_bdevs_operational": 3, 00:16:45.607 "base_bdevs_list": [ 00:16:45.607 { 00:16:45.607 "name": "BaseBdev1", 00:16:45.607 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:45.607 "is_configured": true, 00:16:45.607 "data_offset": 2048, 00:16:45.607 "data_size": 63488 00:16:45.607 }, 00:16:45.607 { 00:16:45.607 "name": null, 00:16:45.607 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:45.607 "is_configured": false, 00:16:45.607 "data_offset": 0, 00:16:45.607 "data_size": 63488 00:16:45.607 }, 00:16:45.607 { 00:16:45.607 "name": "BaseBdev3", 00:16:45.607 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 
00:16:45.607 "is_configured": true, 00:16:45.607 "data_offset": 2048, 00:16:45.607 "data_size": 63488 00:16:45.607 } 00:16:45.607 ] 00:16:45.607 }' 00:16:45.607 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.607 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.867 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.867 [2024-10-11 09:50:30.491588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.127 "name": "Existed_Raid", 00:16:46.127 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:46.127 "strip_size_kb": 64, 00:16:46.127 "state": "configuring", 00:16:46.127 "raid_level": "raid5f", 00:16:46.127 "superblock": true, 00:16:46.127 "num_base_bdevs": 3, 00:16:46.127 "num_base_bdevs_discovered": 1, 00:16:46.127 "num_base_bdevs_operational": 3, 00:16:46.127 "base_bdevs_list": [ 00:16:46.127 { 00:16:46.127 
"name": null, 00:16:46.127 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:46.127 "is_configured": false, 00:16:46.127 "data_offset": 0, 00:16:46.127 "data_size": 63488 00:16:46.127 }, 00:16:46.127 { 00:16:46.127 "name": null, 00:16:46.127 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:46.127 "is_configured": false, 00:16:46.127 "data_offset": 0, 00:16:46.127 "data_size": 63488 00:16:46.127 }, 00:16:46.127 { 00:16:46.127 "name": "BaseBdev3", 00:16:46.127 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:46.127 "is_configured": true, 00:16:46.127 "data_offset": 2048, 00:16:46.127 "data_size": 63488 00:16:46.127 } 00:16:46.127 ] 00:16:46.127 }' 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.127 09:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.697 [2024-10-11 
09:50:31.079172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.697 "name": "Existed_Raid", 00:16:46.697 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:46.697 "strip_size_kb": 64, 00:16:46.697 "state": "configuring", 00:16:46.697 "raid_level": "raid5f", 00:16:46.697 "superblock": true, 00:16:46.697 "num_base_bdevs": 3, 00:16:46.697 "num_base_bdevs_discovered": 2, 00:16:46.697 "num_base_bdevs_operational": 3, 00:16:46.697 "base_bdevs_list": [ 00:16:46.697 { 00:16:46.697 "name": null, 00:16:46.697 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:46.697 "is_configured": false, 00:16:46.697 "data_offset": 0, 00:16:46.697 "data_size": 63488 00:16:46.697 }, 00:16:46.697 { 00:16:46.697 "name": "BaseBdev2", 00:16:46.697 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:46.697 "is_configured": true, 00:16:46.697 "data_offset": 2048, 00:16:46.697 "data_size": 63488 00:16:46.697 }, 00:16:46.697 { 00:16:46.697 "name": "BaseBdev3", 00:16:46.697 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:46.697 "is_configured": true, 00:16:46.697 "data_offset": 2048, 00:16:46.697 "data_size": 63488 00:16:46.697 } 00:16:46.697 ] 00:16:46.697 }' 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.697 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.957 09:50:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.957 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9520d40b-1a3b-4f8a-b5eb-9912a3c48b50 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.217 [2024-10-11 09:50:31.637319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:47.217 NewBaseBdev 00:16:47.217 [2024-10-11 09:50:31.637692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:47.217 [2024-10-11 09:50:31.637718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:47.217 [2024-10-11 09:50:31.638047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:47.217 09:50:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.217 [2024-10-11 09:50:31.643566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:47.217 [2024-10-11 09:50:31.643625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:47.217 [2024-10-11 09:50:31.643936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.217 [ 00:16:47.217 { 00:16:47.217 "name": "NewBaseBdev", 00:16:47.217 "aliases": [ 00:16:47.217 "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50" 00:16:47.217 ], 00:16:47.217 "product_name": "Malloc 
disk", 00:16:47.217 "block_size": 512, 00:16:47.217 "num_blocks": 65536, 00:16:47.217 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:47.217 "assigned_rate_limits": { 00:16:47.217 "rw_ios_per_sec": 0, 00:16:47.217 "rw_mbytes_per_sec": 0, 00:16:47.217 "r_mbytes_per_sec": 0, 00:16:47.217 "w_mbytes_per_sec": 0 00:16:47.217 }, 00:16:47.217 "claimed": true, 00:16:47.217 "claim_type": "exclusive_write", 00:16:47.217 "zoned": false, 00:16:47.217 "supported_io_types": { 00:16:47.217 "read": true, 00:16:47.217 "write": true, 00:16:47.217 "unmap": true, 00:16:47.217 "flush": true, 00:16:47.217 "reset": true, 00:16:47.217 "nvme_admin": false, 00:16:47.217 "nvme_io": false, 00:16:47.217 "nvme_io_md": false, 00:16:47.217 "write_zeroes": true, 00:16:47.217 "zcopy": true, 00:16:47.217 "get_zone_info": false, 00:16:47.217 "zone_management": false, 00:16:47.217 "zone_append": false, 00:16:47.217 "compare": false, 00:16:47.217 "compare_and_write": false, 00:16:47.217 "abort": true, 00:16:47.217 "seek_hole": false, 00:16:47.217 "seek_data": false, 00:16:47.217 "copy": true, 00:16:47.217 "nvme_iov_md": false 00:16:47.217 }, 00:16:47.217 "memory_domains": [ 00:16:47.217 { 00:16:47.217 "dma_device_id": "system", 00:16:47.217 "dma_device_type": 1 00:16:47.217 }, 00:16:47.217 { 00:16:47.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.217 "dma_device_type": 2 00:16:47.217 } 00:16:47.217 ], 00:16:47.217 "driver_specific": {} 00:16:47.217 } 00:16:47.217 ] 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.217 09:50:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.217 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.217 "name": "Existed_Raid", 00:16:47.217 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:47.217 "strip_size_kb": 64, 00:16:47.217 "state": "online", 00:16:47.217 "raid_level": "raid5f", 00:16:47.217 "superblock": true, 00:16:47.217 "num_base_bdevs": 3, 00:16:47.218 "num_base_bdevs_discovered": 3, 00:16:47.218 "num_base_bdevs_operational": 3, 00:16:47.218 
"base_bdevs_list": [ 00:16:47.218 { 00:16:47.218 "name": "NewBaseBdev", 00:16:47.218 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:47.218 "is_configured": true, 00:16:47.218 "data_offset": 2048, 00:16:47.218 "data_size": 63488 00:16:47.218 }, 00:16:47.218 { 00:16:47.218 "name": "BaseBdev2", 00:16:47.218 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:47.218 "is_configured": true, 00:16:47.218 "data_offset": 2048, 00:16:47.218 "data_size": 63488 00:16:47.218 }, 00:16:47.218 { 00:16:47.218 "name": "BaseBdev3", 00:16:47.218 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:47.218 "is_configured": true, 00:16:47.218 "data_offset": 2048, 00:16:47.218 "data_size": 63488 00:16:47.218 } 00:16:47.218 ] 00:16:47.218 }' 00:16:47.218 09:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.218 09:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.478 [2024-10-11 09:50:32.085138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.478 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.737 "name": "Existed_Raid", 00:16:47.737 "aliases": [ 00:16:47.737 "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f" 00:16:47.737 ], 00:16:47.737 "product_name": "Raid Volume", 00:16:47.737 "block_size": 512, 00:16:47.737 "num_blocks": 126976, 00:16:47.737 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:47.737 "assigned_rate_limits": { 00:16:47.737 "rw_ios_per_sec": 0, 00:16:47.737 "rw_mbytes_per_sec": 0, 00:16:47.737 "r_mbytes_per_sec": 0, 00:16:47.737 "w_mbytes_per_sec": 0 00:16:47.737 }, 00:16:47.737 "claimed": false, 00:16:47.737 "zoned": false, 00:16:47.737 "supported_io_types": { 00:16:47.737 "read": true, 00:16:47.737 "write": true, 00:16:47.737 "unmap": false, 00:16:47.737 "flush": false, 00:16:47.737 "reset": true, 00:16:47.737 "nvme_admin": false, 00:16:47.737 "nvme_io": false, 00:16:47.737 "nvme_io_md": false, 00:16:47.737 "write_zeroes": true, 00:16:47.737 "zcopy": false, 00:16:47.737 "get_zone_info": false, 00:16:47.737 "zone_management": false, 00:16:47.737 "zone_append": false, 00:16:47.737 "compare": false, 00:16:47.737 "compare_and_write": false, 00:16:47.737 "abort": false, 00:16:47.737 "seek_hole": false, 00:16:47.737 "seek_data": false, 00:16:47.737 "copy": false, 00:16:47.737 "nvme_iov_md": false 00:16:47.737 }, 00:16:47.737 "driver_specific": { 00:16:47.737 "raid": { 00:16:47.737 "uuid": "2fb62d25-1f39-40f7-9bf5-beeb2ff6341f", 00:16:47.737 "strip_size_kb": 64, 00:16:47.737 "state": "online", 00:16:47.737 "raid_level": "raid5f", 00:16:47.737 "superblock": true, 00:16:47.737 
"num_base_bdevs": 3, 00:16:47.737 "num_base_bdevs_discovered": 3, 00:16:47.737 "num_base_bdevs_operational": 3, 00:16:47.737 "base_bdevs_list": [ 00:16:47.737 { 00:16:47.737 "name": "NewBaseBdev", 00:16:47.737 "uuid": "9520d40b-1a3b-4f8a-b5eb-9912a3c48b50", 00:16:47.737 "is_configured": true, 00:16:47.737 "data_offset": 2048, 00:16:47.737 "data_size": 63488 00:16:47.737 }, 00:16:47.737 { 00:16:47.737 "name": "BaseBdev2", 00:16:47.737 "uuid": "20d6e73a-b45f-40bd-afd7-8311714a7514", 00:16:47.737 "is_configured": true, 00:16:47.737 "data_offset": 2048, 00:16:47.737 "data_size": 63488 00:16:47.737 }, 00:16:47.737 { 00:16:47.737 "name": "BaseBdev3", 00:16:47.737 "uuid": "5d7bb275-70d2-49bd-9e00-829385c465c6", 00:16:47.737 "is_configured": true, 00:16:47.737 "data_offset": 2048, 00:16:47.737 "data_size": 63488 00:16:47.737 } 00:16:47.737 ] 00:16:47.737 } 00:16:47.737 } 00:16:47.737 }' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:47.737 BaseBdev2 00:16:47.737 BaseBdev3' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.737 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.738 09:50:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.738 [2024-10-11 09:50:32.324503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.738 [2024-10-11 09:50:32.324571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.738 [2024-10-11 09:50:32.324674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.738 [2024-10-11 09:50:32.325002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.738 [2024-10-11 09:50:32.325064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81085 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81085 ']' 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81085 00:16:47.738 09:50:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.738 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81085 00:16:47.997 killing process with pid 81085 00:16:47.997 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:47.997 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:47.997 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81085' 00:16:47.997 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81085 00:16:47.997 09:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81085 00:16:47.997 [2024-10-11 09:50:32.369134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.256 [2024-10-11 09:50:32.674645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.224 09:50:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:49.224 00:16:49.224 real 0m10.654s 00:16:49.224 user 0m17.019s 00:16:49.224 sys 0m1.822s 00:16:49.224 09:50:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.224 09:50:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.224 ************************************ 00:16:49.224 END TEST raid5f_state_function_test_sb 00:16:49.224 ************************************ 00:16:49.224 09:50:33 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:49.224 09:50:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:49.224 
09:50:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.224 09:50:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.224 ************************************ 00:16:49.224 START TEST raid5f_superblock_test 00:16:49.224 ************************************ 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81709 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81709 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81709 ']' 00:16:49.224 09:50:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.483 09:50:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.483 09:50:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.483 09:50:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.483 09:50:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.483 [2024-10-11 09:50:33.924916] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:16:49.483 [2024-10-11 09:50:33.925155] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81709 ] 00:16:49.483 [2024-10-11 09:50:34.089921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.742 [2024-10-11 09:50:34.219627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.002 [2024-10-11 09:50:34.449002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.002 [2024-10-11 09:50:34.449154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.261 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 malloc1 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 [2024-10-11 09:50:34.856206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:50.262 [2024-10-11 09:50:34.856351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.262 [2024-10-11 09:50:34.856411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.262 [2024-10-11 09:50:34.856497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.262 [2024-10-11 09:50:34.858929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.262 [2024-10-11 09:50:34.859010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:50.262 pt1 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.262 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.522 malloc2 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.522 [2024-10-11 09:50:34.919393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.522 [2024-10-11 09:50:34.919533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.522 [2024-10-11 09:50:34.919576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.522 [2024-10-11 09:50:34.919627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.522 [2024-10-11 09:50:34.921988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.522 [2024-10-11 09:50:34.922061] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.522 pt2 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.522 malloc3 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.522 [2024-10-11 09:50:34.995943] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:50.522 [2024-10-11 09:50:34.996055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.522 [2024-10-11 09:50:34.996097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:50.522 [2024-10-11 09:50:34.996129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.522 [2024-10-11 09:50:34.998386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.522 [2024-10-11 09:50:34.998473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.522 pt3 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:50.522 09:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.522 [2024-10-11 09:50:35.007981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.522 [2024-10-11 09:50:35.009878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.522 [2024-10-11 09:50:35.010000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.522 [2024-10-11 09:50:35.010197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.522 [2024-10-11 09:50:35.010250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:50.522 [2024-10-11 09:50:35.010556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:50.522 [2024-10-11 09:50:35.016495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.522 [2024-10-11 09:50:35.016549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.522 [2024-10-11 09:50:35.016837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.522 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.522 "name": "raid_bdev1", 00:16:50.522 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:50.522 "strip_size_kb": 64, 00:16:50.522 "state": "online", 00:16:50.522 "raid_level": "raid5f", 00:16:50.522 "superblock": true, 00:16:50.522 "num_base_bdevs": 3, 00:16:50.522 "num_base_bdevs_discovered": 3, 00:16:50.522 "num_base_bdevs_operational": 3, 00:16:50.522 "base_bdevs_list": [ 00:16:50.522 { 00:16:50.522 "name": "pt1", 00:16:50.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.522 "is_configured": true, 00:16:50.522 "data_offset": 2048, 00:16:50.522 "data_size": 63488 00:16:50.522 }, 00:16:50.522 { 00:16:50.522 "name": "pt2", 00:16:50.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.523 "is_configured": true, 00:16:50.523 "data_offset": 2048, 00:16:50.523 "data_size": 63488 00:16:50.523 }, 00:16:50.523 { 00:16:50.523 "name": "pt3", 00:16:50.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.523 "is_configured": true, 00:16:50.523 "data_offset": 2048, 00:16:50.523 "data_size": 63488 00:16:50.523 } 00:16:50.523 ] 00:16:50.523 }' 00:16:50.523 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.523 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.092 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:51.092 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:51.092 09:50:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.092 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.092 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.092 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.092 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.093 [2024-10-11 09:50:35.454209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.093 "name": "raid_bdev1", 00:16:51.093 "aliases": [ 00:16:51.093 "51abdc95-ad35-40bd-8d2e-98a86235826f" 00:16:51.093 ], 00:16:51.093 "product_name": "Raid Volume", 00:16:51.093 "block_size": 512, 00:16:51.093 "num_blocks": 126976, 00:16:51.093 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:51.093 "assigned_rate_limits": { 00:16:51.093 "rw_ios_per_sec": 0, 00:16:51.093 "rw_mbytes_per_sec": 0, 00:16:51.093 "r_mbytes_per_sec": 0, 00:16:51.093 "w_mbytes_per_sec": 0 00:16:51.093 }, 00:16:51.093 "claimed": false, 00:16:51.093 "zoned": false, 00:16:51.093 "supported_io_types": { 00:16:51.093 "read": true, 00:16:51.093 "write": true, 00:16:51.093 "unmap": false, 00:16:51.093 "flush": false, 00:16:51.093 "reset": true, 00:16:51.093 "nvme_admin": false, 00:16:51.093 "nvme_io": false, 00:16:51.093 "nvme_io_md": false, 
00:16:51.093 "write_zeroes": true, 00:16:51.093 "zcopy": false, 00:16:51.093 "get_zone_info": false, 00:16:51.093 "zone_management": false, 00:16:51.093 "zone_append": false, 00:16:51.093 "compare": false, 00:16:51.093 "compare_and_write": false, 00:16:51.093 "abort": false, 00:16:51.093 "seek_hole": false, 00:16:51.093 "seek_data": false, 00:16:51.093 "copy": false, 00:16:51.093 "nvme_iov_md": false 00:16:51.093 }, 00:16:51.093 "driver_specific": { 00:16:51.093 "raid": { 00:16:51.093 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:51.093 "strip_size_kb": 64, 00:16:51.093 "state": "online", 00:16:51.093 "raid_level": "raid5f", 00:16:51.093 "superblock": true, 00:16:51.093 "num_base_bdevs": 3, 00:16:51.093 "num_base_bdevs_discovered": 3, 00:16:51.093 "num_base_bdevs_operational": 3, 00:16:51.093 "base_bdevs_list": [ 00:16:51.093 { 00:16:51.093 "name": "pt1", 00:16:51.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.093 "is_configured": true, 00:16:51.093 "data_offset": 2048, 00:16:51.093 "data_size": 63488 00:16:51.093 }, 00:16:51.093 { 00:16:51.093 "name": "pt2", 00:16:51.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.093 "is_configured": true, 00:16:51.093 "data_offset": 2048, 00:16:51.093 "data_size": 63488 00:16:51.093 }, 00:16:51.093 { 00:16:51.093 "name": "pt3", 00:16:51.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.093 "is_configured": true, 00:16:51.093 "data_offset": 2048, 00:16:51.093 "data_size": 63488 00:16:51.093 } 00:16:51.093 ] 00:16:51.093 } 00:16:51.093 } 00:16:51.093 }' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:51.093 pt2 00:16:51.093 pt3' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.093 
09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.093 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.352 [2024-10-11 09:50:35.729754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51abdc95-ad35-40bd-8d2e-98a86235826f 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 51abdc95-ad35-40bd-8d2e-98a86235826f ']' 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.352 09:50:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.352 [2024-10-11 09:50:35.777440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.352 [2024-10-11 09:50:35.777532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.352 [2024-10-11 09:50:35.777691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.352 [2024-10-11 09:50:35.777790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.352 [2024-10-11 09:50:35.777802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.352 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.353 [2024-10-11 09:50:35.929292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:51.353 [2024-10-11 09:50:35.931206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:51.353 [2024-10-11 09:50:35.931317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:51.353 [2024-10-11 09:50:35.931393] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:51.353 [2024-10-11 09:50:35.931492] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:51.353 [2024-10-11 09:50:35.931548] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:51.353 [2024-10-11 09:50:35.931588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.353 [2024-10-11 09:50:35.931598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:51.353 request: 00:16:51.353 { 00:16:51.353 "name": "raid_bdev1", 00:16:51.353 "raid_level": "raid5f", 00:16:51.353 "base_bdevs": [ 00:16:51.353 "malloc1", 00:16:51.353 "malloc2", 00:16:51.353 "malloc3" 00:16:51.353 ], 00:16:51.353 "strip_size_kb": 64, 00:16:51.353 "superblock": false, 00:16:51.353 "method": "bdev_raid_create", 00:16:51.353 "req_id": 1 00:16:51.353 } 00:16:51.353 Got JSON-RPC error response 00:16:51.353 response: 00:16:51.353 { 00:16:51.353 "code": -17, 00:16:51.353 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:51.353 } 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:51.353 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.612 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:51.612 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:51.612 09:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.612 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.612 09:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.612 [2024-10-11 09:50:35.997056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.612 [2024-10-11 09:50:35.997163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.612 [2024-10-11 09:50:35.997200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:51.612 [2024-10-11 09:50:35.997231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.612 [2024-10-11 09:50:35.999549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.612 [2024-10-11 09:50:35.999637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.612 [2024-10-11 09:50:35.999775] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:51.612 [2024-10-11 09:50:35.999879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.612 pt1 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.612 "name": "raid_bdev1", 00:16:51.612 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:51.612 "strip_size_kb": 64, 00:16:51.612 "state": "configuring", 00:16:51.612 "raid_level": "raid5f", 00:16:51.612 "superblock": true, 00:16:51.612 "num_base_bdevs": 3, 00:16:51.612 "num_base_bdevs_discovered": 1, 00:16:51.612 
"num_base_bdevs_operational": 3, 00:16:51.612 "base_bdevs_list": [ 00:16:51.612 { 00:16:51.612 "name": "pt1", 00:16:51.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.612 "is_configured": true, 00:16:51.612 "data_offset": 2048, 00:16:51.612 "data_size": 63488 00:16:51.612 }, 00:16:51.612 { 00:16:51.612 "name": null, 00:16:51.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.612 "is_configured": false, 00:16:51.612 "data_offset": 2048, 00:16:51.612 "data_size": 63488 00:16:51.612 }, 00:16:51.612 { 00:16:51.612 "name": null, 00:16:51.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.612 "is_configured": false, 00:16:51.612 "data_offset": 2048, 00:16:51.612 "data_size": 63488 00:16:51.612 } 00:16:51.612 ] 00:16:51.612 }' 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.612 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.872 [2024-10-11 09:50:36.448305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.872 [2024-10-11 09:50:36.448416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.872 [2024-10-11 09:50:36.448458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:51.872 [2024-10-11 09:50:36.448512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.872 [2024-10-11 09:50:36.449057] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.872 [2024-10-11 09:50:36.449114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.872 [2024-10-11 09:50:36.449233] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:51.872 [2024-10-11 09:50:36.449284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.872 pt2 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.872 [2024-10-11 09:50:36.460283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.872 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.132 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.132 "name": "raid_bdev1", 00:16:52.132 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:52.132 "strip_size_kb": 64, 00:16:52.132 "state": "configuring", 00:16:52.132 "raid_level": "raid5f", 00:16:52.132 "superblock": true, 00:16:52.132 "num_base_bdevs": 3, 00:16:52.132 "num_base_bdevs_discovered": 1, 00:16:52.132 "num_base_bdevs_operational": 3, 00:16:52.132 "base_bdevs_list": [ 00:16:52.132 { 00:16:52.132 "name": "pt1", 00:16:52.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.132 "is_configured": true, 00:16:52.132 "data_offset": 2048, 00:16:52.132 "data_size": 63488 00:16:52.132 }, 00:16:52.132 { 00:16:52.132 "name": null, 00:16:52.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.132 "is_configured": false, 00:16:52.132 "data_offset": 0, 00:16:52.132 "data_size": 63488 00:16:52.132 }, 00:16:52.132 { 00:16:52.132 "name": null, 00:16:52.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.132 "is_configured": false, 00:16:52.132 "data_offset": 2048, 00:16:52.132 "data_size": 63488 00:16:52.132 } 00:16:52.132 ] 00:16:52.132 }' 00:16:52.132 09:50:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.132 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.391 [2024-10-11 09:50:36.943471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:52.391 [2024-10-11 09:50:36.943618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.391 [2024-10-11 09:50:36.943659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:52.391 [2024-10-11 09:50:36.943702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.391 [2024-10-11 09:50:36.944244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.391 [2024-10-11 09:50:36.944320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:52.391 [2024-10-11 09:50:36.944422] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:52.391 [2024-10-11 09:50:36.944450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.391 pt2 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:52.391 09:50:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.391 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.391 [2024-10-11 09:50:36.955444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:52.391 [2024-10-11 09:50:36.955544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.391 [2024-10-11 09:50:36.955575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:52.391 [2024-10-11 09:50:36.955603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.391 [2024-10-11 09:50:36.956103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.391 [2024-10-11 09:50:36.956176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.391 [2024-10-11 09:50:36.956288] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:52.392 [2024-10-11 09:50:36.956343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.392 [2024-10-11 09:50:36.956509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:52.392 [2024-10-11 09:50:36.956561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:52.392 [2024-10-11 09:50:36.956863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:52.392 [2024-10-11 09:50:36.962635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:52.392 [2024-10-11 09:50:36.962689] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:52.392 [2024-10-11 09:50:36.962964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.392 pt3 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.392 09:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.392 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.392 "name": "raid_bdev1", 00:16:52.392 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:52.392 "strip_size_kb": 64, 00:16:52.392 "state": "online", 00:16:52.392 "raid_level": "raid5f", 00:16:52.392 "superblock": true, 00:16:52.392 "num_base_bdevs": 3, 00:16:52.392 "num_base_bdevs_discovered": 3, 00:16:52.392 "num_base_bdevs_operational": 3, 00:16:52.392 "base_bdevs_list": [ 00:16:52.392 { 00:16:52.392 "name": "pt1", 00:16:52.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.392 "is_configured": true, 00:16:52.392 "data_offset": 2048, 00:16:52.392 "data_size": 63488 00:16:52.392 }, 00:16:52.392 { 00:16:52.392 "name": "pt2", 00:16:52.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.392 "is_configured": true, 00:16:52.392 "data_offset": 2048, 00:16:52.392 "data_size": 63488 00:16:52.392 }, 00:16:52.392 { 00:16:52.392 "name": "pt3", 00:16:52.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.392 "is_configured": true, 00:16:52.392 "data_offset": 2048, 00:16:52.392 "data_size": 63488 00:16:52.392 } 00:16:52.392 ] 00:16:52.392 }' 00:16:52.392 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.392 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.961 [2024-10-11 09:50:37.404371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.961 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:52.961 "name": "raid_bdev1", 00:16:52.961 "aliases": [ 00:16:52.961 "51abdc95-ad35-40bd-8d2e-98a86235826f" 00:16:52.961 ], 00:16:52.961 "product_name": "Raid Volume", 00:16:52.961 "block_size": 512, 00:16:52.961 "num_blocks": 126976, 00:16:52.961 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:52.961 "assigned_rate_limits": { 00:16:52.961 "rw_ios_per_sec": 0, 00:16:52.961 "rw_mbytes_per_sec": 0, 00:16:52.961 "r_mbytes_per_sec": 0, 00:16:52.961 "w_mbytes_per_sec": 0 00:16:52.961 }, 00:16:52.961 "claimed": false, 00:16:52.961 "zoned": false, 00:16:52.961 "supported_io_types": { 00:16:52.961 "read": true, 00:16:52.962 "write": true, 00:16:52.962 "unmap": false, 00:16:52.962 "flush": false, 00:16:52.962 "reset": true, 00:16:52.962 "nvme_admin": false, 00:16:52.962 "nvme_io": false, 00:16:52.962 "nvme_io_md": false, 00:16:52.962 "write_zeroes": true, 00:16:52.962 "zcopy": false, 00:16:52.962 
"get_zone_info": false, 00:16:52.962 "zone_management": false, 00:16:52.962 "zone_append": false, 00:16:52.962 "compare": false, 00:16:52.962 "compare_and_write": false, 00:16:52.962 "abort": false, 00:16:52.962 "seek_hole": false, 00:16:52.962 "seek_data": false, 00:16:52.962 "copy": false, 00:16:52.962 "nvme_iov_md": false 00:16:52.962 }, 00:16:52.962 "driver_specific": { 00:16:52.962 "raid": { 00:16:52.962 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:52.962 "strip_size_kb": 64, 00:16:52.962 "state": "online", 00:16:52.962 "raid_level": "raid5f", 00:16:52.962 "superblock": true, 00:16:52.962 "num_base_bdevs": 3, 00:16:52.962 "num_base_bdevs_discovered": 3, 00:16:52.962 "num_base_bdevs_operational": 3, 00:16:52.962 "base_bdevs_list": [ 00:16:52.962 { 00:16:52.962 "name": "pt1", 00:16:52.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.962 "is_configured": true, 00:16:52.962 "data_offset": 2048, 00:16:52.962 "data_size": 63488 00:16:52.962 }, 00:16:52.962 { 00:16:52.962 "name": "pt2", 00:16:52.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.962 "is_configured": true, 00:16:52.962 "data_offset": 2048, 00:16:52.962 "data_size": 63488 00:16:52.962 }, 00:16:52.962 { 00:16:52.962 "name": "pt3", 00:16:52.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.962 "is_configured": true, 00:16:52.962 "data_offset": 2048, 00:16:52.962 "data_size": 63488 00:16:52.962 } 00:16:52.962 ] 00:16:52.962 } 00:16:52.962 } 00:16:52.962 }' 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:52.962 pt2 00:16:52.962 pt3' 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.962 09:50:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.962 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.220 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:53.221 [2024-10-11 09:50:37.663930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51abdc95-ad35-40bd-8d2e-98a86235826f '!=' 51abdc95-ad35-40bd-8d2e-98a86235826f ']' 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.221 [2024-10-11 09:50:37.711650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.221 "name": "raid_bdev1", 00:16:53.221 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:53.221 "strip_size_kb": 64, 00:16:53.221 "state": "online", 00:16:53.221 "raid_level": "raid5f", 00:16:53.221 "superblock": true, 00:16:53.221 "num_base_bdevs": 3, 00:16:53.221 "num_base_bdevs_discovered": 2, 00:16:53.221 "num_base_bdevs_operational": 2, 00:16:53.221 "base_bdevs_list": [ 00:16:53.221 { 00:16:53.221 "name": null, 00:16:53.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.221 "is_configured": false, 00:16:53.221 "data_offset": 0, 00:16:53.221 "data_size": 63488 00:16:53.221 }, 00:16:53.221 { 00:16:53.221 "name": "pt2", 00:16:53.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.221 "is_configured": true, 00:16:53.221 "data_offset": 2048, 00:16:53.221 "data_size": 63488 00:16:53.221 }, 00:16:53.221 { 00:16:53.221 "name": "pt3", 00:16:53.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.221 "is_configured": true, 00:16:53.221 "data_offset": 2048, 00:16:53.221 "data_size": 63488 00:16:53.221 } 00:16:53.221 ] 00:16:53.221 }' 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.221 09:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 [2024-10-11 09:50:38.138903] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.787 [2024-10-11 09:50:38.138976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.787 [2024-10-11 09:50:38.139084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.787 [2024-10-11 09:50:38.139179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.787 [2024-10-11 09:50:38.139235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.787 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.788 [2024-10-11 09:50:38.226682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.788 [2024-10-11 09:50:38.226803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.788 [2024-10-11 09:50:38.226838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:53.788 [2024-10-11 09:50:38.226879] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:53.788 [2024-10-11 09:50:38.229111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.788 [2024-10-11 09:50:38.229182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.788 [2024-10-11 09:50:38.229282] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:53.788 [2024-10-11 09:50:38.229371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.788 pt2 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.788 "name": "raid_bdev1", 00:16:53.788 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:53.788 "strip_size_kb": 64, 00:16:53.788 "state": "configuring", 00:16:53.788 "raid_level": "raid5f", 00:16:53.788 "superblock": true, 00:16:53.788 "num_base_bdevs": 3, 00:16:53.788 "num_base_bdevs_discovered": 1, 00:16:53.788 "num_base_bdevs_operational": 2, 00:16:53.788 "base_bdevs_list": [ 00:16:53.788 { 00:16:53.788 "name": null, 00:16:53.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.788 "is_configured": false, 00:16:53.788 "data_offset": 2048, 00:16:53.788 "data_size": 63488 00:16:53.788 }, 00:16:53.788 { 00:16:53.788 "name": "pt2", 00:16:53.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.788 "is_configured": true, 00:16:53.788 "data_offset": 2048, 00:16:53.788 "data_size": 63488 00:16:53.788 }, 00:16:53.788 { 00:16:53.788 "name": null, 00:16:53.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.788 "is_configured": false, 00:16:53.788 "data_offset": 2048, 00:16:53.788 "data_size": 63488 00:16:53.788 } 00:16:53.788 ] 00:16:53.788 }' 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.788 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.360 [2024-10-11 09:50:38.713881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:54.360 [2024-10-11 09:50:38.713991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.360 [2024-10-11 09:50:38.714033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:54.360 [2024-10-11 09:50:38.714064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.360 [2024-10-11 09:50:38.714599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.360 [2024-10-11 09:50:38.714660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:54.360 [2024-10-11 09:50:38.714783] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:54.360 [2024-10-11 09:50:38.714850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:54.360 [2024-10-11 09:50:38.715014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:54.360 [2024-10-11 09:50:38.715055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:54.360 [2024-10-11 09:50:38.715333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:54.360 [2024-10-11 09:50:38.721192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:54.360 [2024-10-11 09:50:38.721213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:16:54.360 [2024-10-11 09:50:38.721504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.360 pt3 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.360 09:50:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.360 "name": "raid_bdev1", 00:16:54.360 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:54.360 "strip_size_kb": 64, 00:16:54.360 "state": "online", 00:16:54.360 "raid_level": "raid5f", 00:16:54.360 "superblock": true, 00:16:54.360 "num_base_bdevs": 3, 00:16:54.360 "num_base_bdevs_discovered": 2, 00:16:54.360 "num_base_bdevs_operational": 2, 00:16:54.360 "base_bdevs_list": [ 00:16:54.360 { 00:16:54.360 "name": null, 00:16:54.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.360 "is_configured": false, 00:16:54.360 "data_offset": 2048, 00:16:54.360 "data_size": 63488 00:16:54.360 }, 00:16:54.360 { 00:16:54.360 "name": "pt2", 00:16:54.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.360 "is_configured": true, 00:16:54.360 "data_offset": 2048, 00:16:54.360 "data_size": 63488 00:16:54.360 }, 00:16:54.360 { 00:16:54.360 "name": "pt3", 00:16:54.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.360 "is_configured": true, 00:16:54.360 "data_offset": 2048, 00:16:54.360 "data_size": 63488 00:16:54.360 } 00:16:54.360 ] 00:16:54.360 }' 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.360 09:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.620 [2024-10-11 09:50:39.171774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.620 [2024-10-11 09:50:39.171852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.620 [2024-10-11 09:50:39.171985] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.620 [2024-10-11 09:50:39.172094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.620 [2024-10-11 09:50:39.172150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:54.620 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.621 [2024-10-11 09:50:39.227664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.621 [2024-10-11 09:50:39.227812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.621 [2024-10-11 09:50:39.227859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:54.621 [2024-10-11 09:50:39.227894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.621 [2024-10-11 09:50:39.230501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.621 [2024-10-11 09:50:39.230581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.621 [2024-10-11 09:50:39.230699] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:54.621 [2024-10-11 09:50:39.230806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.621 [2024-10-11 09:50:39.230998] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:54.621 [2024-10-11 09:50:39.231056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.621 [2024-10-11 09:50:39.231098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:54.621 [2024-10-11 09:50:39.231214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.621 pt1 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:54.621 09:50:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.621 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.881 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.881 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.881 "name": "raid_bdev1", 00:16:54.881 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:54.881 "strip_size_kb": 64, 00:16:54.881 "state": "configuring", 00:16:54.881 "raid_level": "raid5f", 00:16:54.881 
"superblock": true, 00:16:54.881 "num_base_bdevs": 3, 00:16:54.881 "num_base_bdevs_discovered": 1, 00:16:54.881 "num_base_bdevs_operational": 2, 00:16:54.881 "base_bdevs_list": [ 00:16:54.881 { 00:16:54.881 "name": null, 00:16:54.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.881 "is_configured": false, 00:16:54.881 "data_offset": 2048, 00:16:54.881 "data_size": 63488 00:16:54.881 }, 00:16:54.881 { 00:16:54.881 "name": "pt2", 00:16:54.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.881 "is_configured": true, 00:16:54.881 "data_offset": 2048, 00:16:54.881 "data_size": 63488 00:16:54.881 }, 00:16:54.881 { 00:16:54.881 "name": null, 00:16:54.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.881 "is_configured": false, 00:16:54.881 "data_offset": 2048, 00:16:54.881 "data_size": 63488 00:16:54.881 } 00:16:54.881 ] 00:16:54.881 }' 00:16:54.881 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.881 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.141 [2024-10-11 09:50:39.730843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:55.141 [2024-10-11 09:50:39.730959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.141 [2024-10-11 09:50:39.731007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:55.141 [2024-10-11 09:50:39.731046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.141 [2024-10-11 09:50:39.731631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.141 [2024-10-11 09:50:39.731717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:55.141 [2024-10-11 09:50:39.731866] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:55.141 [2024-10-11 09:50:39.731929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:55.141 [2024-10-11 09:50:39.732114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:55.141 [2024-10-11 09:50:39.732158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:55.141 [2024-10-11 09:50:39.732492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:55.141 [2024-10-11 09:50:39.739656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:55.141 [2024-10-11 09:50:39.739745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:55.141 [2024-10-11 09:50:39.740064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.141 pt3 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.141 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.142 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.401 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.401 "name": "raid_bdev1", 00:16:55.401 "uuid": "51abdc95-ad35-40bd-8d2e-98a86235826f", 00:16:55.401 "strip_size_kb": 64, 00:16:55.401 "state": "online", 00:16:55.401 "raid_level": 
"raid5f", 00:16:55.401 "superblock": true, 00:16:55.401 "num_base_bdevs": 3, 00:16:55.401 "num_base_bdevs_discovered": 2, 00:16:55.401 "num_base_bdevs_operational": 2, 00:16:55.401 "base_bdevs_list": [ 00:16:55.401 { 00:16:55.401 "name": null, 00:16:55.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.401 "is_configured": false, 00:16:55.401 "data_offset": 2048, 00:16:55.401 "data_size": 63488 00:16:55.401 }, 00:16:55.401 { 00:16:55.401 "name": "pt2", 00:16:55.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.401 "is_configured": true, 00:16:55.401 "data_offset": 2048, 00:16:55.401 "data_size": 63488 00:16:55.401 }, 00:16:55.401 { 00:16:55.401 "name": "pt3", 00:16:55.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.401 "is_configured": true, 00:16:55.401 "data_offset": 2048, 00:16:55.401 "data_size": 63488 00:16:55.401 } 00:16:55.401 ] 00:16:55.401 }' 00:16:55.401 09:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.401 09:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 [2024-10-11 09:50:40.174006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 51abdc95-ad35-40bd-8d2e-98a86235826f '!=' 51abdc95-ad35-40bd-8d2e-98a86235826f ']' 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81709 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81709 ']' 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81709 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81709 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81709' 00:16:55.661 killing process with pid 81709 00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81709 00:16:55.661 [2024-10-11 09:50:40.224094] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.661 [2024-10-11 09:50:40.224216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:16:55.661 09:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81709 00:16:55.661 [2024-10-11 09:50:40.224292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.661 [2024-10-11 09:50:40.224305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:55.921 [2024-10-11 09:50:40.516075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.301 09:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:57.301 00:16:57.301 real 0m7.762s 00:16:57.301 user 0m12.132s 00:16:57.301 sys 0m1.393s 00:16:57.301 09:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.301 09:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.301 ************************************ 00:16:57.301 END TEST raid5f_superblock_test 00:16:57.301 ************************************ 00:16:57.301 09:50:41 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:57.301 09:50:41 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:57.301 09:50:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:57.301 09:50:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.301 09:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.301 ************************************ 00:16:57.301 START TEST raid5f_rebuild_test 00:16:57.301 ************************************ 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:57.301 09:50:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82147 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82147 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 82147 ']' 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.301 09:50:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.301 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:57.301 Zero copy mechanism will not be used. 00:16:57.301 [2024-10-11 09:50:41.783354] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:16:57.301 [2024-10-11 09:50:41.783472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82147 ] 00:16:57.561 [2024-10-11 09:50:41.948825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.561 [2024-10-11 09:50:42.073838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.820 [2024-10-11 09:50:42.292218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.820 [2024-10-11 09:50:42.292283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.079 BaseBdev1_malloc 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.079 09:50:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.079 [2024-10-11 09:50:42.675535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:58.079 [2024-10-11 09:50:42.675660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.079 [2024-10-11 09:50:42.675786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:58.079 [2024-10-11 09:50:42.675837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.079 [2024-10-11 09:50:42.678169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.079 [2024-10-11 09:50:42.678251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.079 BaseBdev1 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.079 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 BaseBdev2_malloc 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 [2024-10-11 09:50:42.734205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:16:58.340 [2024-10-11 09:50:42.734269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.340 [2024-10-11 09:50:42.734288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:58.340 [2024-10-11 09:50:42.734299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.340 [2024-10-11 09:50:42.736379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.340 [2024-10-11 09:50:42.736419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:58.340 BaseBdev2 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 BaseBdev3_malloc 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 [2024-10-11 09:50:42.806573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:58.340 [2024-10-11 09:50:42.806683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.340 [2024-10-11 09:50:42.806727] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:16:58.340 [2024-10-11 09:50:42.806775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.340 [2024-10-11 09:50:42.808909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.340 [2024-10-11 09:50:42.808988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:58.340 BaseBdev3 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 spare_malloc 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 spare_delay 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 [2024-10-11 09:50:42.874813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:58.340 [2024-10-11 09:50:42.874909] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.340 [2024-10-11 09:50:42.874963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:58.340 [2024-10-11 09:50:42.874980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.340 [2024-10-11 09:50:42.877057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.340 [2024-10-11 09:50:42.877096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:58.340 spare 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 [2024-10-11 09:50:42.886853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.340 [2024-10-11 09:50:42.888678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.340 [2024-10-11 09:50:42.888820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.340 [2024-10-11 09:50:42.888958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:58.340 [2024-10-11 09:50:42.889012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:58.340 [2024-10-11 09:50:42.889289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:58.340 [2024-10-11 09:50:42.895115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:58.340 [2024-10-11 09:50:42.895188] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:58.340 [2024-10-11 09:50:42.895442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.340 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.340 09:50:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.340 "name": "raid_bdev1", 00:16:58.341 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:16:58.341 "strip_size_kb": 64, 00:16:58.341 "state": "online", 00:16:58.341 "raid_level": "raid5f", 00:16:58.341 "superblock": false, 00:16:58.341 "num_base_bdevs": 3, 00:16:58.341 "num_base_bdevs_discovered": 3, 00:16:58.341 "num_base_bdevs_operational": 3, 00:16:58.341 "base_bdevs_list": [ 00:16:58.341 { 00:16:58.341 "name": "BaseBdev1", 00:16:58.341 "uuid": "46c528c6-00dd-592d-9e39-db44dde57922", 00:16:58.341 "is_configured": true, 00:16:58.341 "data_offset": 0, 00:16:58.341 "data_size": 65536 00:16:58.341 }, 00:16:58.341 { 00:16:58.341 "name": "BaseBdev2", 00:16:58.341 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:16:58.341 "is_configured": true, 00:16:58.341 "data_offset": 0, 00:16:58.341 "data_size": 65536 00:16:58.341 }, 00:16:58.341 { 00:16:58.341 "name": "BaseBdev3", 00:16:58.341 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:16:58.341 "is_configured": true, 00:16:58.341 "data_offset": 0, 00:16:58.341 "data_size": 65536 00:16:58.341 } 00:16:58.341 ] 00:16:58.341 }' 00:16:58.341 09:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.341 09:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.909 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.910 [2024-10-11 09:50:43.392493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:16:58.910 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:59.170 [2024-10-11 09:50:43.687937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:59.170 /dev/nbd0 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:59.170 1+0 records in 00:16:59.170 1+0 records out 00:16:59.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387197 s, 10.6 MB/s 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:59.170 09:50:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:59.755 512+0 records in 00:16:59.755 512+0 records out 00:16:59.755 67108864 bytes (67 MB, 64 MiB) copied, 0.389087 s, 172 MB/s 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.755 
[2024-10-11 09:50:44.371468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.755 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.014 [2024-10-11 09:50:44.390120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.014 "name": "raid_bdev1", 00:17:00.014 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:00.014 "strip_size_kb": 64, 00:17:00.014 "state": "online", 00:17:00.014 "raid_level": "raid5f", 00:17:00.014 "superblock": false, 00:17:00.014 "num_base_bdevs": 3, 00:17:00.014 "num_base_bdevs_discovered": 2, 00:17:00.014 "num_base_bdevs_operational": 2, 00:17:00.014 "base_bdevs_list": [ 00:17:00.014 { 00:17:00.014 "name": null, 00:17:00.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.014 "is_configured": false, 00:17:00.014 "data_offset": 0, 00:17:00.014 "data_size": 65536 00:17:00.014 }, 00:17:00.014 { 00:17:00.014 "name": "BaseBdev2", 00:17:00.014 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:00.014 "is_configured": true, 00:17:00.014 "data_offset": 0, 00:17:00.014 "data_size": 65536 00:17:00.014 }, 00:17:00.014 { 00:17:00.014 "name": "BaseBdev3", 00:17:00.014 "uuid": 
"04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:00.014 "is_configured": true, 00:17:00.014 "data_offset": 0, 00:17:00.014 "data_size": 65536 00:17:00.014 } 00:17:00.014 ] 00:17:00.014 }' 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.014 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.274 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.274 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.274 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.274 [2024-10-11 09:50:44.833365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.274 [2024-10-11 09:50:44.851262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:00.274 09:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.274 09:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:00.274 [2024-10-11 09:50:44.860278] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.652 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.653 09:50:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.653 "name": "raid_bdev1", 00:17:01.653 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:01.653 "strip_size_kb": 64, 00:17:01.653 "state": "online", 00:17:01.653 "raid_level": "raid5f", 00:17:01.653 "superblock": false, 00:17:01.653 "num_base_bdevs": 3, 00:17:01.653 "num_base_bdevs_discovered": 3, 00:17:01.653 "num_base_bdevs_operational": 3, 00:17:01.653 "process": { 00:17:01.653 "type": "rebuild", 00:17:01.653 "target": "spare", 00:17:01.653 "progress": { 00:17:01.653 "blocks": 18432, 00:17:01.653 "percent": 14 00:17:01.653 } 00:17:01.653 }, 00:17:01.653 "base_bdevs_list": [ 00:17:01.653 { 00:17:01.653 "name": "spare", 00:17:01.653 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:01.653 "is_configured": true, 00:17:01.653 "data_offset": 0, 00:17:01.653 "data_size": 65536 00:17:01.653 }, 00:17:01.653 { 00:17:01.653 "name": "BaseBdev2", 00:17:01.653 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:01.653 "is_configured": true, 00:17:01.653 "data_offset": 0, 00:17:01.653 "data_size": 65536 00:17:01.653 }, 00:17:01.653 { 00:17:01.653 "name": "BaseBdev3", 00:17:01.653 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:01.653 "is_configured": true, 00:17:01.653 "data_offset": 0, 00:17:01.653 "data_size": 65536 00:17:01.653 } 00:17:01.653 ] 00:17:01.653 }' 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.653 09:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.653 [2024-10-11 09:50:46.011673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.653 [2024-10-11 09:50:46.069996] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.653 [2024-10-11 09:50:46.070160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.653 [2024-10-11 09:50:46.070216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.653 [2024-10-11 09:50:46.070261] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.653 "name": "raid_bdev1", 00:17:01.653 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:01.653 "strip_size_kb": 64, 00:17:01.653 "state": "online", 00:17:01.653 "raid_level": "raid5f", 00:17:01.653 "superblock": false, 00:17:01.653 "num_base_bdevs": 3, 00:17:01.653 "num_base_bdevs_discovered": 2, 00:17:01.653 "num_base_bdevs_operational": 2, 00:17:01.653 "base_bdevs_list": [ 00:17:01.653 { 00:17:01.653 "name": null, 00:17:01.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.653 "is_configured": false, 00:17:01.653 "data_offset": 0, 00:17:01.653 "data_size": 65536 00:17:01.653 }, 00:17:01.653 { 00:17:01.653 "name": "BaseBdev2", 00:17:01.653 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:01.653 "is_configured": true, 00:17:01.653 "data_offset": 0, 00:17:01.653 "data_size": 65536 00:17:01.653 }, 00:17:01.653 { 00:17:01.653 "name": "BaseBdev3", 00:17:01.653 "uuid": 
"04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:01.653 "is_configured": true, 00:17:01.653 "data_offset": 0, 00:17:01.653 "data_size": 65536 00:17:01.653 } 00:17:01.653 ] 00:17:01.653 }' 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.653 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.912 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.912 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.912 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.913 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.913 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.913 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.913 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.913 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.913 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.913 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.172 "name": "raid_bdev1", 00:17:02.172 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:02.172 "strip_size_kb": 64, 00:17:02.172 "state": "online", 00:17:02.172 "raid_level": "raid5f", 00:17:02.172 "superblock": false, 00:17:02.172 "num_base_bdevs": 3, 00:17:02.172 "num_base_bdevs_discovered": 2, 00:17:02.172 "num_base_bdevs_operational": 2, 00:17:02.172 "base_bdevs_list": [ 00:17:02.172 { 00:17:02.172 
"name": null, 00:17:02.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.172 "is_configured": false, 00:17:02.172 "data_offset": 0, 00:17:02.172 "data_size": 65536 00:17:02.172 }, 00:17:02.172 { 00:17:02.172 "name": "BaseBdev2", 00:17:02.172 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:02.172 "is_configured": true, 00:17:02.172 "data_offset": 0, 00:17:02.172 "data_size": 65536 00:17:02.172 }, 00:17:02.172 { 00:17:02.172 "name": "BaseBdev3", 00:17:02.172 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:02.172 "is_configured": true, 00:17:02.172 "data_offset": 0, 00:17:02.172 "data_size": 65536 00:17:02.172 } 00:17:02.172 ] 00:17:02.172 }' 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.172 [2024-10-11 09:50:46.681962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.172 [2024-10-11 09:50:46.698768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.172 09:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:02.172 [2024-10-11 09:50:46.706573] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.111 09:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.370 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.371 "name": "raid_bdev1", 00:17:03.371 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:03.371 "strip_size_kb": 64, 00:17:03.371 "state": "online", 00:17:03.371 "raid_level": "raid5f", 00:17:03.371 "superblock": false, 00:17:03.371 "num_base_bdevs": 3, 00:17:03.371 "num_base_bdevs_discovered": 3, 00:17:03.371 "num_base_bdevs_operational": 3, 00:17:03.371 "process": { 00:17:03.371 "type": "rebuild", 00:17:03.371 "target": "spare", 00:17:03.371 "progress": { 00:17:03.371 "blocks": 20480, 00:17:03.371 "percent": 15 00:17:03.371 } 00:17:03.371 }, 00:17:03.371 "base_bdevs_list": [ 00:17:03.371 { 00:17:03.371 "name": "spare", 00:17:03.371 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:03.371 "is_configured": true, 00:17:03.371 "data_offset": 0, 
00:17:03.371 "data_size": 65536 00:17:03.371 }, 00:17:03.371 { 00:17:03.371 "name": "BaseBdev2", 00:17:03.371 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:03.371 "is_configured": true, 00:17:03.371 "data_offset": 0, 00:17:03.371 "data_size": 65536 00:17:03.371 }, 00:17:03.371 { 00:17:03.371 "name": "BaseBdev3", 00:17:03.371 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:03.371 "is_configured": true, 00:17:03.371 "data_offset": 0, 00:17:03.371 "data_size": 65536 00:17:03.371 } 00:17:03.371 ] 00:17:03.371 }' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=563 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.371 09:50:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.371 "name": "raid_bdev1", 00:17:03.371 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:03.371 "strip_size_kb": 64, 00:17:03.371 "state": "online", 00:17:03.371 "raid_level": "raid5f", 00:17:03.371 "superblock": false, 00:17:03.371 "num_base_bdevs": 3, 00:17:03.371 "num_base_bdevs_discovered": 3, 00:17:03.371 "num_base_bdevs_operational": 3, 00:17:03.371 "process": { 00:17:03.371 "type": "rebuild", 00:17:03.371 "target": "spare", 00:17:03.371 "progress": { 00:17:03.371 "blocks": 22528, 00:17:03.371 "percent": 17 00:17:03.371 } 00:17:03.371 }, 00:17:03.371 "base_bdevs_list": [ 00:17:03.371 { 00:17:03.371 "name": "spare", 00:17:03.371 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:03.371 "is_configured": true, 00:17:03.371 "data_offset": 0, 00:17:03.371 "data_size": 65536 00:17:03.371 }, 00:17:03.371 { 00:17:03.371 "name": "BaseBdev2", 00:17:03.371 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:03.371 "is_configured": true, 00:17:03.371 "data_offset": 0, 00:17:03.371 "data_size": 65536 00:17:03.371 }, 00:17:03.371 { 00:17:03.371 "name": "BaseBdev3", 00:17:03.371 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:03.371 "is_configured": true, 00:17:03.371 "data_offset": 0, 00:17:03.371 "data_size": 65536 00:17:03.371 } 
00:17:03.371 ] 00:17:03.371 }' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.371 09:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.631 09:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.631 09:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.568 "name": "raid_bdev1", 00:17:04.568 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:04.568 
"strip_size_kb": 64, 00:17:04.568 "state": "online", 00:17:04.568 "raid_level": "raid5f", 00:17:04.568 "superblock": false, 00:17:04.568 "num_base_bdevs": 3, 00:17:04.568 "num_base_bdevs_discovered": 3, 00:17:04.568 "num_base_bdevs_operational": 3, 00:17:04.568 "process": { 00:17:04.568 "type": "rebuild", 00:17:04.568 "target": "spare", 00:17:04.568 "progress": { 00:17:04.568 "blocks": 47104, 00:17:04.568 "percent": 35 00:17:04.568 } 00:17:04.568 }, 00:17:04.568 "base_bdevs_list": [ 00:17:04.568 { 00:17:04.568 "name": "spare", 00:17:04.568 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:04.568 "is_configured": true, 00:17:04.568 "data_offset": 0, 00:17:04.568 "data_size": 65536 00:17:04.568 }, 00:17:04.568 { 00:17:04.568 "name": "BaseBdev2", 00:17:04.568 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:04.568 "is_configured": true, 00:17:04.568 "data_offset": 0, 00:17:04.568 "data_size": 65536 00:17:04.568 }, 00:17:04.568 { 00:17:04.568 "name": "BaseBdev3", 00:17:04.568 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:04.568 "is_configured": true, 00:17:04.568 "data_offset": 0, 00:17:04.568 "data_size": 65536 00:17:04.568 } 00:17:04.568 ] 00:17:04.568 }' 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.568 09:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.947 09:50:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.947 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.947 "name": "raid_bdev1", 00:17:05.947 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:05.947 "strip_size_kb": 64, 00:17:05.947 "state": "online", 00:17:05.947 "raid_level": "raid5f", 00:17:05.947 "superblock": false, 00:17:05.947 "num_base_bdevs": 3, 00:17:05.947 "num_base_bdevs_discovered": 3, 00:17:05.948 "num_base_bdevs_operational": 3, 00:17:05.948 "process": { 00:17:05.948 "type": "rebuild", 00:17:05.948 "target": "spare", 00:17:05.948 "progress": { 00:17:05.948 "blocks": 69632, 00:17:05.948 "percent": 53 00:17:05.948 } 00:17:05.948 }, 00:17:05.948 "base_bdevs_list": [ 00:17:05.948 { 00:17:05.948 "name": "spare", 00:17:05.948 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:05.948 "is_configured": true, 00:17:05.948 "data_offset": 0, 00:17:05.948 "data_size": 65536 00:17:05.948 }, 00:17:05.948 { 00:17:05.948 "name": "BaseBdev2", 00:17:05.948 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:05.948 
"is_configured": true, 00:17:05.948 "data_offset": 0, 00:17:05.948 "data_size": 65536 00:17:05.948 }, 00:17:05.948 { 00:17:05.948 "name": "BaseBdev3", 00:17:05.948 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:05.948 "is_configured": true, 00:17:05.948 "data_offset": 0, 00:17:05.948 "data_size": 65536 00:17:05.948 } 00:17:05.948 ] 00:17:05.948 }' 00:17:05.948 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.948 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.948 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.948 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.948 09:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.886 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.886 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.886 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.886 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.886 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.886 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.886 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.887 "name": "raid_bdev1", 00:17:06.887 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:06.887 "strip_size_kb": 64, 00:17:06.887 "state": "online", 00:17:06.887 "raid_level": "raid5f", 00:17:06.887 "superblock": false, 00:17:06.887 "num_base_bdevs": 3, 00:17:06.887 "num_base_bdevs_discovered": 3, 00:17:06.887 "num_base_bdevs_operational": 3, 00:17:06.887 "process": { 00:17:06.887 "type": "rebuild", 00:17:06.887 "target": "spare", 00:17:06.887 "progress": { 00:17:06.887 "blocks": 92160, 00:17:06.887 "percent": 70 00:17:06.887 } 00:17:06.887 }, 00:17:06.887 "base_bdevs_list": [ 00:17:06.887 { 00:17:06.887 "name": "spare", 00:17:06.887 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:06.887 "is_configured": true, 00:17:06.887 "data_offset": 0, 00:17:06.887 "data_size": 65536 00:17:06.887 }, 00:17:06.887 { 00:17:06.887 "name": "BaseBdev2", 00:17:06.887 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:06.887 "is_configured": true, 00:17:06.887 "data_offset": 0, 00:17:06.887 "data_size": 65536 00:17:06.887 }, 00:17:06.887 { 00:17:06.887 "name": "BaseBdev3", 00:17:06.887 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:06.887 "is_configured": true, 00:17:06.887 "data_offset": 0, 00:17:06.887 "data_size": 65536 00:17:06.887 } 00:17:06.887 ] 00:17:06.887 }' 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.887 09:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.887 09:50:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.265 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.265 "name": "raid_bdev1", 00:17:08.265 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:08.265 "strip_size_kb": 64, 00:17:08.265 "state": "online", 00:17:08.265 "raid_level": "raid5f", 00:17:08.265 "superblock": false, 00:17:08.265 "num_base_bdevs": 3, 00:17:08.266 "num_base_bdevs_discovered": 3, 00:17:08.266 "num_base_bdevs_operational": 3, 00:17:08.266 "process": { 00:17:08.266 "type": "rebuild", 00:17:08.266 "target": "spare", 00:17:08.266 "progress": { 00:17:08.266 "blocks": 116736, 00:17:08.266 "percent": 89 00:17:08.266 } 00:17:08.266 }, 00:17:08.266 "base_bdevs_list": [ 00:17:08.266 { 
00:17:08.266 "name": "spare", 00:17:08.266 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:08.266 "is_configured": true, 00:17:08.266 "data_offset": 0, 00:17:08.266 "data_size": 65536 00:17:08.266 }, 00:17:08.266 { 00:17:08.266 "name": "BaseBdev2", 00:17:08.266 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:08.266 "is_configured": true, 00:17:08.266 "data_offset": 0, 00:17:08.266 "data_size": 65536 00:17:08.266 }, 00:17:08.266 { 00:17:08.266 "name": "BaseBdev3", 00:17:08.266 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:08.266 "is_configured": true, 00:17:08.266 "data_offset": 0, 00:17:08.266 "data_size": 65536 00:17:08.266 } 00:17:08.266 ] 00:17:08.266 }' 00:17:08.266 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.266 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.266 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.266 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.266 09:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.834 [2024-10-11 09:50:53.159252] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:08.834 [2024-10-11 09:50:53.159447] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:08.834 [2024-10-11 09:50:53.159500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.093 09:50:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.093 "name": "raid_bdev1", 00:17:09.093 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:09.093 "strip_size_kb": 64, 00:17:09.093 "state": "online", 00:17:09.093 "raid_level": "raid5f", 00:17:09.093 "superblock": false, 00:17:09.093 "num_base_bdevs": 3, 00:17:09.093 "num_base_bdevs_discovered": 3, 00:17:09.093 "num_base_bdevs_operational": 3, 00:17:09.093 "base_bdevs_list": [ 00:17:09.093 { 00:17:09.093 "name": "spare", 00:17:09.093 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:09.093 "is_configured": true, 00:17:09.093 "data_offset": 0, 00:17:09.093 "data_size": 65536 00:17:09.093 }, 00:17:09.093 { 00:17:09.093 "name": "BaseBdev2", 00:17:09.093 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:09.093 "is_configured": true, 00:17:09.093 "data_offset": 0, 00:17:09.093 "data_size": 65536 00:17:09.093 }, 00:17:09.093 { 00:17:09.093 "name": "BaseBdev3", 00:17:09.093 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:09.093 "is_configured": true, 00:17:09.093 "data_offset": 0, 00:17:09.093 "data_size": 65536 00:17:09.093 } 
00:17:09.093 ] 00:17:09.093 }' 00:17:09.093 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.353 "name": "raid_bdev1", 00:17:09.353 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:09.353 "strip_size_kb": 64, 00:17:09.353 "state": "online", 00:17:09.353 "raid_level": "raid5f", 00:17:09.353 "superblock": false, 
00:17:09.353 "num_base_bdevs": 3, 00:17:09.353 "num_base_bdevs_discovered": 3, 00:17:09.353 "num_base_bdevs_operational": 3, 00:17:09.353 "base_bdevs_list": [ 00:17:09.353 { 00:17:09.353 "name": "spare", 00:17:09.353 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:09.353 "is_configured": true, 00:17:09.353 "data_offset": 0, 00:17:09.353 "data_size": 65536 00:17:09.353 }, 00:17:09.353 { 00:17:09.353 "name": "BaseBdev2", 00:17:09.353 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:09.353 "is_configured": true, 00:17:09.353 "data_offset": 0, 00:17:09.353 "data_size": 65536 00:17:09.353 }, 00:17:09.353 { 00:17:09.353 "name": "BaseBdev3", 00:17:09.353 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 00:17:09.353 "is_configured": true, 00:17:09.353 "data_offset": 0, 00:17:09.353 "data_size": 65536 00:17:09.353 } 00:17:09.353 ] 00:17:09.353 }' 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.353 
09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.353 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.613 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.613 "name": "raid_bdev1", 00:17:09.613 "uuid": "1111cd18-5cd0-41b9-a184-a3b49d993993", 00:17:09.613 "strip_size_kb": 64, 00:17:09.613 "state": "online", 00:17:09.613 "raid_level": "raid5f", 00:17:09.613 "superblock": false, 00:17:09.613 "num_base_bdevs": 3, 00:17:09.613 "num_base_bdevs_discovered": 3, 00:17:09.613 "num_base_bdevs_operational": 3, 00:17:09.613 "base_bdevs_list": [ 00:17:09.613 { 00:17:09.613 "name": "spare", 00:17:09.613 "uuid": "943af5f4-aba0-5f47-9fbf-d7439c61f391", 00:17:09.613 "is_configured": true, 00:17:09.613 "data_offset": 0, 00:17:09.613 "data_size": 65536 00:17:09.613 }, 00:17:09.613 { 00:17:09.613 "name": "BaseBdev2", 00:17:09.613 "uuid": "d494e4b4-fe16-51b5-ae6b-da0a745a5c22", 00:17:09.613 "is_configured": true, 00:17:09.613 "data_offset": 0, 00:17:09.613 "data_size": 65536 00:17:09.613 }, 00:17:09.613 { 00:17:09.613 "name": "BaseBdev3", 00:17:09.613 "uuid": "04c4c9d6-cb92-54b4-90cd-0ce5b576b894", 
00:17:09.613 "is_configured": true, 00:17:09.613 "data_offset": 0, 00:17:09.613 "data_size": 65536 00:17:09.613 } 00:17:09.613 ] 00:17:09.613 }' 00:17:09.613 09:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.613 09:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.873 [2024-10-11 09:50:54.372561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.873 [2024-10-11 09:50:54.372648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.873 [2024-10-11 09:50:54.372833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.873 [2024-10-11 09:50:54.372987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.873 [2024-10-11 09:50:54.373058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:09.873 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:10.133 /dev/nbd0 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.133 1+0 records in 00:17:10.133 1+0 records out 00:17:10.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432857 s, 9.5 MB/s 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:10.133 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:10.393 /dev/nbd1 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:10.393 09:50:54 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.393 1+0 records in 00:17:10.393 1+0 records out 00:17:10.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430637 s, 9.5 MB/s 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:10.393 09:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:10.652 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:10.652 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.652 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:10.652 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:10.652 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:10.653 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.653 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.912 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82147 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 82147 ']' 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 82147 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82147 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82147' 00:17:11.171 killing process with pid 82147 00:17:11.171 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 82147 00:17:11.171 
Received shutdown signal, test time was about 60.000000 seconds 00:17:11.171 00:17:11.171 Latency(us) 00:17:11.171 [2024-10-11T09:50:55.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.171 [2024-10-11T09:50:55.803Z] =================================================================================================================== 00:17:11.172 [2024-10-11T09:50:55.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.172 [2024-10-11 09:50:55.658639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.172 09:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 82147 00:17:11.503 [2024-10-11 09:50:56.035718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:12.889 00:17:12.889 real 0m15.446s 00:17:12.889 user 0m19.071s 00:17:12.889 sys 0m2.034s 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.889 ************************************ 00:17:12.889 END TEST raid5f_rebuild_test 00:17:12.889 ************************************ 00:17:12.889 09:50:57 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:12.889 09:50:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:12.889 09:50:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.889 09:50:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.889 ************************************ 00:17:12.889 START TEST raid5f_rebuild_test_sb 00:17:12.889 ************************************ 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:17:12.889 
09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82591 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82591 00:17:12.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82591 ']' 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.889 09:50:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:12.889 Zero copy mechanism will not be used. 00:17:12.889 [2024-10-11 09:50:57.307639] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:17:12.889 [2024-10-11 09:50:57.307792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82591 ] 00:17:12.889 [2024-10-11 09:50:57.472595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.149 [2024-10-11 09:50:57.595429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.408 [2024-10-11 09:50:57.817263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.408 [2024-10-11 09:50:57.817303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:13.668 09:50:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 BaseBdev1_malloc 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 [2024-10-11 09:50:58.201733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:13.668 [2024-10-11 09:50:58.201891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.668 [2024-10-11 09:50:58.201947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:13.668 [2024-10-11 09:50:58.201986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.668 [2024-10-11 09:50:58.204392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.668 [2024-10-11 09:50:58.204478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.668 BaseBdev1 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:13.668 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.669 BaseBdev2_malloc 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.669 [2024-10-11 09:50:58.259270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:13.669 [2024-10-11 09:50:58.259416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.669 [2024-10-11 09:50:58.259470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.669 [2024-10-11 09:50:58.259533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.669 [2024-10-11 09:50:58.261771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.669 [2024-10-11 09:50:58.261845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:13.669 BaseBdev2 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:13.669 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.669 
09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.929 BaseBdev3_malloc 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.929 [2024-10-11 09:50:58.336092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:13.929 [2024-10-11 09:50:58.336151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.929 [2024-10-11 09:50:58.336173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.929 [2024-10-11 09:50:58.336183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.929 [2024-10-11 09:50:58.338439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.929 [2024-10-11 09:50:58.338485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:13.929 BaseBdev3 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.929 spare_malloc 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.929 spare_delay 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.929 [2024-10-11 09:50:58.409925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.929 [2024-10-11 09:50:58.410042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.929 [2024-10-11 09:50:58.410091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:13.929 [2024-10-11 09:50:58.410149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.929 [2024-10-11 09:50:58.412625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.929 [2024-10-11 09:50:58.412714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.929 spare 00:17:13.929 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.930 [2024-10-11 09:50:58.422033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.930 [2024-10-11 09:50:58.424040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.930 [2024-10-11 09:50:58.424114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.930 [2024-10-11 09:50:58.424306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:13.930 [2024-10-11 09:50:58.424320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:13.930 [2024-10-11 09:50:58.424580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:13.930 [2024-10-11 09:50:58.431097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:13.930 [2024-10-11 09:50:58.431162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:13.930 [2024-10-11 09:50:58.431433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.930 "name": "raid_bdev1", 00:17:13.930 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:13.930 "strip_size_kb": 64, 00:17:13.930 "state": "online", 00:17:13.930 "raid_level": "raid5f", 00:17:13.930 "superblock": true, 00:17:13.930 "num_base_bdevs": 3, 00:17:13.930 "num_base_bdevs_discovered": 3, 00:17:13.930 "num_base_bdevs_operational": 3, 00:17:13.930 "base_bdevs_list": [ 00:17:13.930 { 00:17:13.930 "name": "BaseBdev1", 00:17:13.930 "uuid": "c321994a-f646-5fb2-ae3f-7fbd00f2e82e", 00:17:13.930 "is_configured": true, 00:17:13.930 "data_offset": 2048, 00:17:13.930 "data_size": 63488 00:17:13.930 }, 00:17:13.930 { 00:17:13.930 "name": "BaseBdev2", 00:17:13.930 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:13.930 "is_configured": true, 00:17:13.930 "data_offset": 2048, 00:17:13.930 "data_size": 63488 00:17:13.930 }, 00:17:13.930 { 00:17:13.930 "name": 
"BaseBdev3", 00:17:13.930 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:13.930 "is_configured": true, 00:17:13.930 "data_offset": 2048, 00:17:13.930 "data_size": 63488 00:17:13.930 } 00:17:13.930 ] 00:17:13.930 }' 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.930 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.499 [2024-10-11 09:50:58.877125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:14.499 09:50:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.499 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:14.500 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.500 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:14.500 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.500 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.500 09:50:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:14.759 [2024-10-11 09:50:59.160478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:14.759 /dev/nbd0 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i = 1 )) 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.759 1+0 records in 00:17:14.759 1+0 records out 00:17:14.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294611 s, 13.9 MB/s 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:17:14.759 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:15.329 496+0 records in 00:17:15.329 496+0 records out 00:17:15.329 65011712 bytes (65 MB, 62 MiB) copied, 0.452816 s, 144 MB/s 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.329 [2024-10-11 09:50:59.902930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.329 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:15.330 09:50:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.330 [2024-10-11 09:50:59.914355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.330 09:50:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.330 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.589 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.589 "name": "raid_bdev1", 00:17:15.590 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:15.590 "strip_size_kb": 64, 00:17:15.590 "state": "online", 00:17:15.590 "raid_level": "raid5f", 00:17:15.590 "superblock": true, 00:17:15.590 "num_base_bdevs": 3, 00:17:15.590 "num_base_bdevs_discovered": 2, 00:17:15.590 "num_base_bdevs_operational": 2, 00:17:15.590 "base_bdevs_list": [ 00:17:15.590 { 00:17:15.590 "name": null, 00:17:15.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.590 "is_configured": false, 00:17:15.590 "data_offset": 0, 00:17:15.590 "data_size": 63488 00:17:15.590 }, 00:17:15.590 { 00:17:15.590 "name": "BaseBdev2", 00:17:15.590 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:15.590 "is_configured": true, 00:17:15.590 "data_offset": 2048, 00:17:15.590 "data_size": 63488 00:17:15.590 }, 00:17:15.590 { 00:17:15.590 "name": "BaseBdev3", 00:17:15.590 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:15.590 "is_configured": true, 00:17:15.590 "data_offset": 2048, 00:17:15.590 "data_size": 63488 00:17:15.590 } 00:17:15.590 ] 00:17:15.590 }' 00:17:15.590 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.590 09:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.849 09:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.849 09:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.849 09:51:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.849 [2024-10-11 09:51:00.393576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.849 [2024-10-11 09:51:00.413834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:15.849 09:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.849 09:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:15.849 [2024-10-11 09:51:00.423501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.226 "name": "raid_bdev1", 00:17:17.226 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 
00:17:17.226 "strip_size_kb": 64, 00:17:17.226 "state": "online", 00:17:17.226 "raid_level": "raid5f", 00:17:17.226 "superblock": true, 00:17:17.226 "num_base_bdevs": 3, 00:17:17.226 "num_base_bdevs_discovered": 3, 00:17:17.226 "num_base_bdevs_operational": 3, 00:17:17.226 "process": { 00:17:17.226 "type": "rebuild", 00:17:17.226 "target": "spare", 00:17:17.226 "progress": { 00:17:17.226 "blocks": 20480, 00:17:17.226 "percent": 16 00:17:17.226 } 00:17:17.226 }, 00:17:17.226 "base_bdevs_list": [ 00:17:17.226 { 00:17:17.226 "name": "spare", 00:17:17.226 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:17.226 "is_configured": true, 00:17:17.226 "data_offset": 2048, 00:17:17.226 "data_size": 63488 00:17:17.226 }, 00:17:17.226 { 00:17:17.226 "name": "BaseBdev2", 00:17:17.226 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:17.226 "is_configured": true, 00:17:17.226 "data_offset": 2048, 00:17:17.226 "data_size": 63488 00:17:17.226 }, 00:17:17.226 { 00:17:17.226 "name": "BaseBdev3", 00:17:17.226 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:17.226 "is_configured": true, 00:17:17.226 "data_offset": 2048, 00:17:17.226 "data_size": 63488 00:17:17.226 } 00:17:17.226 ] 00:17:17.226 }' 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.226 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:17.226 [2024-10-11 09:51:01.554610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.226 [2024-10-11 09:51:01.633484] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.226 [2024-10-11 09:51:01.633599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.226 [2024-10-11 09:51:01.633640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.227 [2024-10-11 09:51:01.633663] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.227 
09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.227 "name": "raid_bdev1", 00:17:17.227 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:17.227 "strip_size_kb": 64, 00:17:17.227 "state": "online", 00:17:17.227 "raid_level": "raid5f", 00:17:17.227 "superblock": true, 00:17:17.227 "num_base_bdevs": 3, 00:17:17.227 "num_base_bdevs_discovered": 2, 00:17:17.227 "num_base_bdevs_operational": 2, 00:17:17.227 "base_bdevs_list": [ 00:17:17.227 { 00:17:17.227 "name": null, 00:17:17.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.227 "is_configured": false, 00:17:17.227 "data_offset": 0, 00:17:17.227 "data_size": 63488 00:17:17.227 }, 00:17:17.227 { 00:17:17.227 "name": "BaseBdev2", 00:17:17.227 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:17.227 "is_configured": true, 00:17:17.227 "data_offset": 2048, 00:17:17.227 "data_size": 63488 00:17:17.227 }, 00:17:17.227 { 00:17:17.227 "name": "BaseBdev3", 00:17:17.227 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:17.227 "is_configured": true, 00:17:17.227 "data_offset": 2048, 00:17:17.227 "data_size": 63488 00:17:17.227 } 00:17:17.227 ] 00:17:17.227 }' 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.227 09:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.486 09:51:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.486 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.745 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.745 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.745 "name": "raid_bdev1", 00:17:17.745 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:17.745 "strip_size_kb": 64, 00:17:17.745 "state": "online", 00:17:17.745 "raid_level": "raid5f", 00:17:17.745 "superblock": true, 00:17:17.745 "num_base_bdevs": 3, 00:17:17.745 "num_base_bdevs_discovered": 2, 00:17:17.745 "num_base_bdevs_operational": 2, 00:17:17.745 "base_bdevs_list": [ 00:17:17.745 { 00:17:17.745 "name": null, 00:17:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.745 "is_configured": false, 00:17:17.745 "data_offset": 0, 00:17:17.746 "data_size": 63488 00:17:17.746 }, 00:17:17.746 { 00:17:17.746 "name": "BaseBdev2", 00:17:17.746 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:17.746 "is_configured": true, 00:17:17.746 "data_offset": 2048, 00:17:17.746 "data_size": 63488 00:17:17.746 }, 00:17:17.746 { 00:17:17.746 "name": "BaseBdev3", 00:17:17.746 "uuid": 
"cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:17.746 "is_configured": true, 00:17:17.746 "data_offset": 2048, 00:17:17.746 "data_size": 63488 00:17:17.746 } 00:17:17.746 ] 00:17:17.746 }' 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.746 [2024-10-11 09:51:02.233011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.746 [2024-10-11 09:51:02.251481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.746 09:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:17.746 [2024-10-11 09:51:02.259562] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.685 "name": "raid_bdev1", 00:17:18.685 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:18.685 "strip_size_kb": 64, 00:17:18.685 "state": "online", 00:17:18.685 "raid_level": "raid5f", 00:17:18.685 "superblock": true, 00:17:18.685 "num_base_bdevs": 3, 00:17:18.685 "num_base_bdevs_discovered": 3, 00:17:18.685 "num_base_bdevs_operational": 3, 00:17:18.685 "process": { 00:17:18.685 "type": "rebuild", 00:17:18.685 "target": "spare", 00:17:18.685 "progress": { 00:17:18.685 "blocks": 20480, 00:17:18.685 "percent": 16 00:17:18.685 } 00:17:18.685 }, 00:17:18.685 "base_bdevs_list": [ 00:17:18.685 { 00:17:18.685 "name": "spare", 00:17:18.685 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:18.685 "is_configured": true, 00:17:18.685 "data_offset": 2048, 00:17:18.685 "data_size": 63488 00:17:18.685 }, 00:17:18.685 { 00:17:18.685 "name": "BaseBdev2", 00:17:18.685 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:18.685 "is_configured": true, 00:17:18.685 "data_offset": 2048, 00:17:18.685 "data_size": 63488 00:17:18.685 }, 00:17:18.685 { 00:17:18.685 "name": "BaseBdev3", 00:17:18.685 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:18.685 
"is_configured": true, 00:17:18.685 "data_offset": 2048, 00:17:18.685 "data_size": 63488 00:17:18.685 } 00:17:18.685 ] 00:17:18.685 }' 00:17:18.685 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:18.945 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=579 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.945 "name": "raid_bdev1", 00:17:18.945 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:18.945 "strip_size_kb": 64, 00:17:18.945 "state": "online", 00:17:18.945 "raid_level": "raid5f", 00:17:18.945 "superblock": true, 00:17:18.945 "num_base_bdevs": 3, 00:17:18.945 "num_base_bdevs_discovered": 3, 00:17:18.945 "num_base_bdevs_operational": 3, 00:17:18.945 "process": { 00:17:18.945 "type": "rebuild", 00:17:18.945 "target": "spare", 00:17:18.945 "progress": { 00:17:18.945 "blocks": 22528, 00:17:18.945 "percent": 17 00:17:18.945 } 00:17:18.945 }, 00:17:18.945 "base_bdevs_list": [ 00:17:18.945 { 00:17:18.945 "name": "spare", 00:17:18.945 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:18.945 "is_configured": true, 00:17:18.945 "data_offset": 2048, 00:17:18.945 "data_size": 63488 00:17:18.945 }, 00:17:18.945 { 00:17:18.945 "name": "BaseBdev2", 00:17:18.945 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:18.945 "is_configured": true, 00:17:18.945 "data_offset": 2048, 00:17:18.945 "data_size": 63488 00:17:18.945 }, 00:17:18.945 { 00:17:18.945 "name": "BaseBdev3", 00:17:18.945 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:18.945 "is_configured": true, 00:17:18.945 "data_offset": 2048, 00:17:18.945 "data_size": 63488 00:17:18.945 } 00:17:18.945 ] 00:17:18.945 }' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.945 09:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.323 "name": "raid_bdev1", 00:17:20.323 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:20.323 "strip_size_kb": 64, 00:17:20.323 "state": "online", 00:17:20.323 
"raid_level": "raid5f", 00:17:20.323 "superblock": true, 00:17:20.323 "num_base_bdevs": 3, 00:17:20.323 "num_base_bdevs_discovered": 3, 00:17:20.323 "num_base_bdevs_operational": 3, 00:17:20.323 "process": { 00:17:20.323 "type": "rebuild", 00:17:20.323 "target": "spare", 00:17:20.323 "progress": { 00:17:20.323 "blocks": 45056, 00:17:20.323 "percent": 35 00:17:20.323 } 00:17:20.323 }, 00:17:20.323 "base_bdevs_list": [ 00:17:20.323 { 00:17:20.323 "name": "spare", 00:17:20.323 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:20.323 "is_configured": true, 00:17:20.323 "data_offset": 2048, 00:17:20.323 "data_size": 63488 00:17:20.323 }, 00:17:20.323 { 00:17:20.323 "name": "BaseBdev2", 00:17:20.323 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:20.323 "is_configured": true, 00:17:20.323 "data_offset": 2048, 00:17:20.323 "data_size": 63488 00:17:20.323 }, 00:17:20.323 { 00:17:20.323 "name": "BaseBdev3", 00:17:20.323 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:20.323 "is_configured": true, 00:17:20.323 "data_offset": 2048, 00:17:20.323 "data_size": 63488 00:17:20.323 } 00:17:20.323 ] 00:17:20.323 }' 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.323 09:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.259 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.260 "name": "raid_bdev1", 00:17:21.260 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:21.260 "strip_size_kb": 64, 00:17:21.260 "state": "online", 00:17:21.260 "raid_level": "raid5f", 00:17:21.260 "superblock": true, 00:17:21.260 "num_base_bdevs": 3, 00:17:21.260 "num_base_bdevs_discovered": 3, 00:17:21.260 "num_base_bdevs_operational": 3, 00:17:21.260 "process": { 00:17:21.260 "type": "rebuild", 00:17:21.260 "target": "spare", 00:17:21.260 "progress": { 00:17:21.260 "blocks": 69632, 00:17:21.260 "percent": 54 00:17:21.260 } 00:17:21.260 }, 00:17:21.260 "base_bdevs_list": [ 00:17:21.260 { 00:17:21.260 "name": "spare", 00:17:21.260 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:21.260 "is_configured": true, 00:17:21.260 "data_offset": 2048, 00:17:21.260 "data_size": 63488 00:17:21.260 }, 00:17:21.260 { 00:17:21.260 "name": "BaseBdev2", 00:17:21.260 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:21.260 
"is_configured": true, 00:17:21.260 "data_offset": 2048, 00:17:21.260 "data_size": 63488 00:17:21.260 }, 00:17:21.260 { 00:17:21.260 "name": "BaseBdev3", 00:17:21.260 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:21.260 "is_configured": true, 00:17:21.260 "data_offset": 2048, 00:17:21.260 "data_size": 63488 00:17:21.260 } 00:17:21.260 ] 00:17:21.260 }' 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.260 09:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.637 09:51:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.637 "name": "raid_bdev1", 00:17:22.637 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:22.637 "strip_size_kb": 64, 00:17:22.637 "state": "online", 00:17:22.637 "raid_level": "raid5f", 00:17:22.637 "superblock": true, 00:17:22.637 "num_base_bdevs": 3, 00:17:22.637 "num_base_bdevs_discovered": 3, 00:17:22.637 "num_base_bdevs_operational": 3, 00:17:22.637 "process": { 00:17:22.637 "type": "rebuild", 00:17:22.637 "target": "spare", 00:17:22.637 "progress": { 00:17:22.637 "blocks": 92160, 00:17:22.637 "percent": 72 00:17:22.637 } 00:17:22.637 }, 00:17:22.637 "base_bdevs_list": [ 00:17:22.637 { 00:17:22.637 "name": "spare", 00:17:22.637 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:22.637 "is_configured": true, 00:17:22.637 "data_offset": 2048, 00:17:22.637 "data_size": 63488 00:17:22.637 }, 00:17:22.637 { 00:17:22.637 "name": "BaseBdev2", 00:17:22.637 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:22.637 "is_configured": true, 00:17:22.637 "data_offset": 2048, 00:17:22.637 "data_size": 63488 00:17:22.637 }, 00:17:22.637 { 00:17:22.637 "name": "BaseBdev3", 00:17:22.637 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:22.637 "is_configured": true, 00:17:22.637 "data_offset": 2048, 00:17:22.637 "data_size": 63488 00:17:22.637 } 00:17:22.637 ] 00:17:22.637 }' 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.637 09:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.576 09:51:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.576 09:51:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.576 09:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.576 "name": "raid_bdev1", 00:17:23.576 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:23.576 "strip_size_kb": 64, 00:17:23.576 "state": "online", 00:17:23.576 "raid_level": "raid5f", 00:17:23.576 "superblock": true, 00:17:23.576 "num_base_bdevs": 3, 00:17:23.576 "num_base_bdevs_discovered": 3, 00:17:23.576 "num_base_bdevs_operational": 3, 00:17:23.576 "process": { 00:17:23.576 "type": "rebuild", 00:17:23.576 "target": "spare", 00:17:23.576 "progress": { 00:17:23.576 "blocks": 114688, 
00:17:23.576 "percent": 90 00:17:23.576 } 00:17:23.576 }, 00:17:23.576 "base_bdevs_list": [ 00:17:23.576 { 00:17:23.576 "name": "spare", 00:17:23.576 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:23.576 "is_configured": true, 00:17:23.576 "data_offset": 2048, 00:17:23.576 "data_size": 63488 00:17:23.576 }, 00:17:23.576 { 00:17:23.576 "name": "BaseBdev2", 00:17:23.576 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:23.576 "is_configured": true, 00:17:23.576 "data_offset": 2048, 00:17:23.576 "data_size": 63488 00:17:23.576 }, 00:17:23.576 { 00:17:23.576 "name": "BaseBdev3", 00:17:23.576 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:23.576 "is_configured": true, 00:17:23.576 "data_offset": 2048, 00:17:23.576 "data_size": 63488 00:17:23.576 } 00:17:23.576 ] 00:17:23.576 }' 00:17:23.576 09:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.576 09:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.576 09:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.576 09:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.576 09:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.145 [2024-10-11 09:51:08.512928] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:24.145 [2024-10-11 09:51:08.513063] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:24.145 [2024-10-11 09:51:08.513242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.713 
09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.713 "name": "raid_bdev1", 00:17:24.713 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:24.713 "strip_size_kb": 64, 00:17:24.713 "state": "online", 00:17:24.713 "raid_level": "raid5f", 00:17:24.713 "superblock": true, 00:17:24.713 "num_base_bdevs": 3, 00:17:24.713 "num_base_bdevs_discovered": 3, 00:17:24.713 "num_base_bdevs_operational": 3, 00:17:24.713 "base_bdevs_list": [ 00:17:24.713 { 00:17:24.713 "name": "spare", 00:17:24.713 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:24.713 "is_configured": true, 00:17:24.713 "data_offset": 2048, 00:17:24.713 "data_size": 63488 00:17:24.713 }, 00:17:24.713 { 00:17:24.713 "name": "BaseBdev2", 00:17:24.713 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:24.713 "is_configured": true, 00:17:24.713 "data_offset": 2048, 00:17:24.713 "data_size": 63488 00:17:24.713 }, 00:17:24.713 { 00:17:24.713 "name": "BaseBdev3", 00:17:24.713 
"uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:24.713 "is_configured": true, 00:17:24.713 "data_offset": 2048, 00:17:24.713 "data_size": 63488 00:17:24.713 } 00:17:24.713 ] 00:17:24.713 }' 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.713 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.973 "name": 
"raid_bdev1", 00:17:24.973 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:24.973 "strip_size_kb": 64, 00:17:24.973 "state": "online", 00:17:24.973 "raid_level": "raid5f", 00:17:24.973 "superblock": true, 00:17:24.973 "num_base_bdevs": 3, 00:17:24.973 "num_base_bdevs_discovered": 3, 00:17:24.973 "num_base_bdevs_operational": 3, 00:17:24.973 "base_bdevs_list": [ 00:17:24.973 { 00:17:24.973 "name": "spare", 00:17:24.973 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:24.973 "is_configured": true, 00:17:24.973 "data_offset": 2048, 00:17:24.973 "data_size": 63488 00:17:24.973 }, 00:17:24.973 { 00:17:24.973 "name": "BaseBdev2", 00:17:24.973 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:24.973 "is_configured": true, 00:17:24.973 "data_offset": 2048, 00:17:24.973 "data_size": 63488 00:17:24.973 }, 00:17:24.973 { 00:17:24.973 "name": "BaseBdev3", 00:17:24.973 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:24.973 "is_configured": true, 00:17:24.973 "data_offset": 2048, 00:17:24.973 "data_size": 63488 00:17:24.973 } 00:17:24.973 ] 00:17:24.973 }' 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.973 "name": "raid_bdev1", 00:17:24.973 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:24.973 "strip_size_kb": 64, 00:17:24.973 "state": "online", 00:17:24.973 "raid_level": "raid5f", 00:17:24.973 "superblock": true, 00:17:24.973 "num_base_bdevs": 3, 00:17:24.973 "num_base_bdevs_discovered": 3, 00:17:24.973 "num_base_bdevs_operational": 3, 00:17:24.973 "base_bdevs_list": [ 00:17:24.973 { 00:17:24.973 "name": "spare", 00:17:24.973 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2", 00:17:24.973 "is_configured": true, 00:17:24.973 "data_offset": 2048, 00:17:24.973 "data_size": 63488 00:17:24.973 }, 00:17:24.973 { 00:17:24.973 "name": "BaseBdev2", 
00:17:24.973 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:24.973 "is_configured": true, 00:17:24.973 "data_offset": 2048, 00:17:24.973 "data_size": 63488 00:17:24.973 }, 00:17:24.973 { 00:17:24.973 "name": "BaseBdev3", 00:17:24.973 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:24.973 "is_configured": true, 00:17:24.973 "data_offset": 2048, 00:17:24.973 "data_size": 63488 00:17:24.973 } 00:17:24.973 ] 00:17:24.973 }' 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.973 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 [2024-10-11 09:51:09.901961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.542 [2024-10-11 09:51:09.902044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.542 [2024-10-11 09:51:09.902166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.542 [2024-10-11 09:51:09.902284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.542 [2024-10-11 09:51:09.902343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:25.542 09:51:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:25.542 09:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:25.542 /dev/nbd0 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.802 1+0 records in 00:17:25.802 1+0 records out 00:17:25.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514538 s, 8.0 MB/s 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.802 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:17:25.803 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:25.803 /dev/nbd1 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.062 1+0 records in 00:17:26.062 1+0 records out 00:17:26.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481785 s, 8.5 MB/s 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.062 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.321 09:51:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.321 09:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.581 [2024-10-11 09:51:11.139272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.581 [2024-10-11 09:51:11.139339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.581 [2024-10-11 09:51:11.139363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:26.581 [2024-10-11 09:51:11.139374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.581 [2024-10-11 09:51:11.142062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.581 [2024-10-11 09:51:11.142106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.581 [2024-10-11 09:51:11.142223] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:26.581 [2024-10-11 09:51:11.142311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.581 [2024-10-11 09:51:11.142475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.581 [2024-10-11 09:51:11.142618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.581 spare 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.581 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.841 [2024-10-11 09:51:11.242551] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:26.841 [2024-10-11 09:51:11.242623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:26.841 [2024-10-11 09:51:11.243036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:26.841 [2024-10-11 09:51:11.249530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:26.841 [2024-10-11 09:51:11.249567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:26.841 [2024-10-11 09:51:11.249818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.841 09:51:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:26.841 "name": "raid_bdev1",
00:17:26.841 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:26.841 "strip_size_kb": 64,
00:17:26.841 "state": "online",
00:17:26.841 "raid_level": "raid5f",
00:17:26.841 "superblock": true,
00:17:26.841 "num_base_bdevs": 3,
00:17:26.841 "num_base_bdevs_discovered": 3,
00:17:26.841 "num_base_bdevs_operational": 3,
00:17:26.841 "base_bdevs_list": [
00:17:26.841 {
00:17:26.841 "name": "spare",
00:17:26.841 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2",
00:17:26.841 "is_configured": true,
00:17:26.841 "data_offset": 2048,
00:17:26.841 "data_size": 63488
00:17:26.841 },
00:17:26.841 {
00:17:26.841 "name": "BaseBdev2",
00:17:26.841 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:26.841 "is_configured": true,
00:17:26.841 "data_offset": 2048,
00:17:26.841 "data_size": 63488
00:17:26.841 },
00:17:26.841 {
00:17:26.841 "name": "BaseBdev3",
00:17:26.841 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:26.841 "is_configured": true,
00:17:26.841 "data_offset": 2048,
00:17:26.841 "data_size": 63488
00:17:26.841 }
00:17:26.841 ]
00:17:26.841 }'
00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:26.841 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.101 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:27.361 "name": "raid_bdev1",
00:17:27.361 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:27.361 "strip_size_kb": 64,
00:17:27.361 "state": "online",
00:17:27.361 "raid_level": "raid5f",
00:17:27.361 "superblock": true,
00:17:27.361 "num_base_bdevs": 3,
00:17:27.361 "num_base_bdevs_discovered": 3,
00:17:27.361 "num_base_bdevs_operational": 3,
00:17:27.361 "base_bdevs_list": [
00:17:27.361 {
00:17:27.361 "name": "spare",
00:17:27.361 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2",
00:17:27.361 "is_configured": true,
00:17:27.361 "data_offset": 2048,
00:17:27.361 "data_size": 63488
00:17:27.361 },
00:17:27.361 {
00:17:27.361 "name": "BaseBdev2",
00:17:27.361 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:27.361 "is_configured": true,
00:17:27.361 "data_offset": 2048,
00:17:27.361 "data_size": 63488
00:17:27.361 },
00:17:27.361 {
00:17:27.361 "name": "BaseBdev3",
00:17:27.361 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:27.361 "is_configured": true,
00:17:27.361 "data_offset": 2048,
00:17:27.361 "data_size": 63488
00:17:27.361 }
00:17:27.361 ]
00:17:27.361 }'
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.361 [2024-10-11 09:51:11.894890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:27.361 "name": "raid_bdev1",
00:17:27.361 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:27.361 "strip_size_kb": 64,
00:17:27.361 "state": "online",
00:17:27.361 "raid_level": "raid5f",
00:17:27.361 "superblock": true,
00:17:27.361 "num_base_bdevs": 3,
00:17:27.361 "num_base_bdevs_discovered": 2,
00:17:27.361 "num_base_bdevs_operational": 2,
00:17:27.361 "base_bdevs_list": [
00:17:27.361 {
00:17:27.361 "name": null,
00:17:27.361 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:27.361 "is_configured": false,
00:17:27.361 "data_offset": 0,
00:17:27.361 "data_size": 63488
00:17:27.361 },
00:17:27.361 {
00:17:27.361 "name": "BaseBdev2",
00:17:27.361 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:27.361 "is_configured": true,
00:17:27.361 "data_offset": 2048,
00:17:27.361 "data_size": 63488
00:17:27.361 },
00:17:27.361 {
00:17:27.361 "name": "BaseBdev3",
00:17:27.361 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:27.361 "is_configured": true,
00:17:27.361 "data_offset": 2048,
00:17:27.361 "data_size": 63488
00:17:27.361 }
00:17:27.361 ]
00:17:27.361 }'
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:27.361 09:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.929 09:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:27.929 09:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.929 09:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:27.929 [2024-10-11 09:51:12.342120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:27.929 [2024-10-11 09:51:12.342320] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:27.929 [2024-10-11 09:51:12.342338] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:27.929 [2024-10-11 09:51:12.342377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:27.929 [2024-10-11 09:51:12.360392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0
00:17:27.929 09:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.929 09:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:17:27.929 [2024-10-11 09:51:12.368845] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:28.885 "name": "raid_bdev1",
00:17:28.885 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:28.885 "strip_size_kb": 64,
00:17:28.885 "state": "online",
00:17:28.885 "raid_level": "raid5f",
00:17:28.885 "superblock": true,
00:17:28.885 "num_base_bdevs": 3,
00:17:28.885 "num_base_bdevs_discovered": 3,
00:17:28.885 "num_base_bdevs_operational": 3,
00:17:28.885 "process": {
00:17:28.885 "type": "rebuild",
00:17:28.885 "target": "spare",
00:17:28.885 "progress": {
00:17:28.885 "blocks": 20480,
00:17:28.885 "percent": 16
00:17:28.885 }
00:17:28.885 },
00:17:28.885 "base_bdevs_list": [
00:17:28.885 {
00:17:28.885 "name": "spare",
00:17:28.885 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2",
00:17:28.885 "is_configured": true,
00:17:28.885 "data_offset": 2048,
00:17:28.885 "data_size": 63488
00:17:28.885 },
00:17:28.885 {
00:17:28.885 "name": "BaseBdev2",
00:17:28.885 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:28.885 "is_configured": true,
00:17:28.885 "data_offset": 2048,
00:17:28.885 "data_size": 63488
00:17:28.885 },
00:17:28.885 {
00:17:28.885 "name": "BaseBdev3",
00:17:28.885 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:28.885 "is_configured": true,
00:17:28.885 "data_offset": 2048,
00:17:28.885 "data_size": 63488
00:17:28.885 }
00:17:28.885 ]
00:17:28.885 }'
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:28.885 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:29.154 [2024-10-11 09:51:13.528190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:29.154 [2024-10-11 09:51:13.580253] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:29.154 [2024-10-11 09:51:13.580348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:29.154 [2024-10-11 09:51:13.580368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:29.154 [2024-10-11 09:51:13.580379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:29.154 "name": "raid_bdev1",
00:17:29.154 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:29.154 "strip_size_kb": 64,
00:17:29.154 "state": "online",
00:17:29.154 "raid_level": "raid5f",
00:17:29.154 "superblock": true,
00:17:29.154 "num_base_bdevs": 3,
00:17:29.154 "num_base_bdevs_discovered": 2,
00:17:29.154 "num_base_bdevs_operational": 2,
00:17:29.154 "base_bdevs_list": [
00:17:29.154 {
00:17:29.154 "name": null,
00:17:29.154 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:29.154 "is_configured": false,
00:17:29.154 "data_offset": 0,
00:17:29.154 "data_size": 63488
00:17:29.154 },
00:17:29.154 {
00:17:29.154 "name": "BaseBdev2",
00:17:29.154 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:29.154 "is_configured": true,
00:17:29.154 "data_offset": 2048,
00:17:29.154 "data_size": 63488
00:17:29.154 },
00:17:29.154 {
00:17:29.154 "name": "BaseBdev3",
00:17:29.154 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:29.154 "is_configured": true,
00:17:29.154 "data_offset": 2048,
00:17:29.154 "data_size": 63488
00:17:29.154 }
00:17:29.154 ]
00:17:29.154 }'
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:29.154 09:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:29.736 09:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:29.736 09:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.736 09:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:29.736 [2024-10-11 09:51:14.108086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:29.736 [2024-10-11 09:51:14.108163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:29.736 [2024-10-11 09:51:14.108188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:17:29.736 [2024-10-11 09:51:14.108202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:29.736 [2024-10-11 09:51:14.108779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:29.736 [2024-10-11 09:51:14.108815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:29.736 [2024-10-11 09:51:14.108934] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:29.736 [2024-10-11 09:51:14.108962] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:29.736 [2024-10-11 09:51:14.108975] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:29.736 [2024-10-11 09:51:14.109001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:29.736 [2024-10-11 09:51:14.127019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0
spare
09:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.736 09:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
[2024-10-11 09:51:14.134805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:30.673 "name": "raid_bdev1",
00:17:30.673 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:30.673 "strip_size_kb": 64,
00:17:30.673 "state": "online",
00:17:30.673 "raid_level": "raid5f",
00:17:30.673 "superblock": true,
00:17:30.673 "num_base_bdevs": 3,
00:17:30.673 "num_base_bdevs_discovered": 3,
00:17:30.673 "num_base_bdevs_operational": 3,
00:17:30.673 "process": {
00:17:30.673 "type": "rebuild",
00:17:30.673 "target": "spare",
00:17:30.673 "progress": {
00:17:30.673 "blocks": 20480,
00:17:30.673 "percent": 16
00:17:30.673 }
00:17:30.673 },
00:17:30.673 "base_bdevs_list": [
00:17:30.673 {
00:17:30.673 "name": "spare",
00:17:30.673 "uuid": "cde748c9-bea9-5b7e-a264-6e3065af1be2",
00:17:30.673 "is_configured": true,
00:17:30.673 "data_offset": 2048,
00:17:30.673 "data_size": 63488
00:17:30.673 },
00:17:30.673 {
00:17:30.673 "name": "BaseBdev2",
00:17:30.673 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:30.673 "is_configured": true,
00:17:30.673 "data_offset": 2048,
00:17:30.673 "data_size": 63488
00:17:30.673 },
00:17:30.673 {
00:17:30.673 "name": "BaseBdev3",
00:17:30.673 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:30.673 "is_configured": true,
00:17:30.673 "data_offset": 2048,
00:17:30.673 "data_size": 63488
00:17:30.673 }
00:17:30.673 ]
00:17:30.673 }'
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:30.673 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:30.674 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:30.674 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:30.674 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:17:30.674 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.674 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:30.674 [2024-10-11 09:51:15.266195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:30.932 [2024-10-11 09:51:15.345224] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:30.932 [2024-10-11 09:51:15.345299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:30.932 [2024-10-11 09:51:15.345318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:30.932 [2024-10-11 09:51:15.345326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:30.932 "name": "raid_bdev1",
00:17:30.932 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:30.932 "strip_size_kb": 64,
00:17:30.932 "state": "online",
00:17:30.932 "raid_level": "raid5f",
00:17:30.932 "superblock": true,
00:17:30.932 "num_base_bdevs": 3,
00:17:30.932 "num_base_bdevs_discovered": 2,
00:17:30.932 "num_base_bdevs_operational": 2,
00:17:30.932 "base_bdevs_list": [
00:17:30.932 {
00:17:30.932 "name": null,
00:17:30.932 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:30.932 "is_configured": false,
00:17:30.932 "data_offset": 0,
00:17:30.932 "data_size": 63488
00:17:30.932 },
00:17:30.932 {
00:17:30.932 "name": "BaseBdev2",
00:17:30.932 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:30.932 "is_configured": true,
00:17:30.932 "data_offset": 2048,
00:17:30.932 "data_size": 63488
00:17:30.932 },
00:17:30.932 {
00:17:30.932 "name": "BaseBdev3",
00:17:30.932 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:30.932 "is_configured": true,
00:17:30.932 "data_offset": 2048,
00:17:30.932 "data_size": 63488
00:17:30.932 }
00:17:30.932 ]
00:17:30.932 }'
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:30.932 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:31.500 "name": "raid_bdev1",
00:17:31.500 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:31.500 "strip_size_kb": 64,
00:17:31.500 "state": "online",
00:17:31.500 "raid_level": "raid5f",
00:17:31.500 "superblock": true,
00:17:31.500 "num_base_bdevs": 3,
00:17:31.500 "num_base_bdevs_discovered": 2,
00:17:31.500 "num_base_bdevs_operational": 2,
00:17:31.500 "base_bdevs_list": [
00:17:31.500 {
00:17:31.500 "name": null,
00:17:31.500 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:31.500 "is_configured": false,
00:17:31.500 "data_offset": 0,
00:17:31.500 "data_size": 63488
00:17:31.500 },
00:17:31.500 {
00:17:31.500 "name": "BaseBdev2",
00:17:31.500 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:31.500 "is_configured": true,
00:17:31.500 "data_offset": 2048,
00:17:31.500 "data_size": 63488
00:17:31.500 },
00:17:31.500 {
00:17:31.500 "name": "BaseBdev3",
00:17:31.500 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:31.500 "is_configured": true,
00:17:31.500 "data_offset": 2048,
00:17:31.500 "data_size": 63488
00:17:31.500 }
00:17:31.500 ]
00:17:31.500 }'
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:31.500 09:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:31.500 [2024-10-11 09:51:16.004796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:31.500 [2024-10-11 09:51:16.004857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:31.500 [2024-10-11 09:51:16.004885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:17:31.500 [2024-10-11 09:51:16.004896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:31.500 [2024-10-11 09:51:16.005393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:31.500 [2024-10-11 09:51:16.005421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:31.500 [2024-10-11 09:51:16.005509] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:17:31.500 [2024-10-11 09:51:16.005546] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:31.500 [2024-10-11 09:51:16.005561] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:31.500 [2024-10-11 09:51:16.005572] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
BaseBdev1
09:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:31.500 09:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:32.438 "name": "raid_bdev1",
00:17:32.438 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:32.438 "strip_size_kb": 64,
00:17:32.438 "state": "online",
00:17:32.438 "raid_level": "raid5f",
00:17:32.438 "superblock": true,
00:17:32.438 "num_base_bdevs": 3,
00:17:32.438 "num_base_bdevs_discovered": 2,
00:17:32.438 "num_base_bdevs_operational": 2,
00:17:32.438 "base_bdevs_list": [
00:17:32.438 {
00:17:32.438 "name": null,
00:17:32.438 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:32.438 "is_configured": false,
00:17:32.438 "data_offset": 0,
00:17:32.438 "data_size": 63488
00:17:32.438 },
00:17:32.438 {
00:17:32.438 "name": "BaseBdev2",
00:17:32.438 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:32.438 "is_configured": true,
00:17:32.438 "data_offset": 2048,
00:17:32.438 "data_size": 63488
00:17:32.438 },
00:17:32.438 {
00:17:32.438 "name": "BaseBdev3",
00:17:32.438 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:32.438 "is_configured": true,
00:17:32.438 "data_offset": 2048,
00:17:32.438 "data_size": 63488
00:17:32.438 }
00:17:32.438 ]
00:17:32.438 }'
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:32.438 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:33.007 "name": "raid_bdev1",
00:17:33.007 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac",
00:17:33.007 "strip_size_kb": 64,
00:17:33.007 "state": "online",
00:17:33.007 "raid_level": "raid5f",
00:17:33.007 "superblock": true,
00:17:33.007 "num_base_bdevs": 3,
00:17:33.007 "num_base_bdevs_discovered": 2,
00:17:33.007 "num_base_bdevs_operational": 2,
00:17:33.007 "base_bdevs_list": [
00:17:33.007 {
00:17:33.007 "name": null,
00:17:33.007 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:33.007 "is_configured": false,
00:17:33.007 "data_offset": 0,
00:17:33.007 "data_size": 63488
00:17:33.007 },
00:17:33.007 {
00:17:33.007 "name": "BaseBdev2",
00:17:33.007 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3",
00:17:33.007 "is_configured": true,
00:17:33.007 "data_offset": 2048,
00:17:33.007 "data_size": 63488
00:17:33.007 },
00:17:33.007 {
00:17:33.007 "name": "BaseBdev3",
00:17:33.007 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2",
00:17:33.007 "is_configured": true,
00:17:33.007 "data_offset": 2048,
00:17:33.007 "data_size": 63488
00:17:33.007 }
00:17:33.007 ]
00:17:33.007 }'
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:33.007 [2024-10-11 09:51:17.630212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:33.007 [2024-10-11 09:51:17.630436] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:33.007 [2024-10-11 09:51:17.630454] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:33.007 request:
00:17:33.007 {
00:17:33.007 "base_bdev": "BaseBdev1",
00:17:33.007 "raid_bdev": "raid_bdev1",
00:17:33.007 "method": "bdev_raid_add_base_bdev",
00:17:33.007 "req_id": 1
00:17:33.007 }
00:17:33.007 Got JSON-RPC error response
00:17:33.007 response:
00:17:33.007 {
00:17:33.007 "code": -22,
00:17:33.007 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:17:33.007 }
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:17:33.007 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:17:33.267 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:33.267 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:33.267 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:33.267 09:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb --
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.225 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.225 "name": "raid_bdev1", 00:17:34.225 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:34.225 "strip_size_kb": 64, 00:17:34.225 "state": "online", 00:17:34.225 "raid_level": "raid5f", 00:17:34.225 "superblock": true, 00:17:34.225 "num_base_bdevs": 3, 00:17:34.225 "num_base_bdevs_discovered": 2, 00:17:34.225 "num_base_bdevs_operational": 2, 00:17:34.225 "base_bdevs_list": [ 00:17:34.225 { 00:17:34.225 "name": null, 00:17:34.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.225 "is_configured": false, 00:17:34.225 "data_offset": 0, 00:17:34.225 "data_size": 63488 00:17:34.225 }, 00:17:34.225 { 00:17:34.225 
"name": "BaseBdev2", 00:17:34.225 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:34.225 "is_configured": true, 00:17:34.225 "data_offset": 2048, 00:17:34.225 "data_size": 63488 00:17:34.225 }, 00:17:34.225 { 00:17:34.225 "name": "BaseBdev3", 00:17:34.225 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:34.225 "is_configured": true, 00:17:34.225 "data_offset": 2048, 00:17:34.225 "data_size": 63488 00:17:34.225 } 00:17:34.225 ] 00:17:34.225 }' 00:17:34.226 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.226 09:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.794 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.794 "name": "raid_bdev1", 00:17:34.794 "uuid": "52ec1261-a8df-4b1f-8c68-ffedb9af42ac", 00:17:34.794 
"strip_size_kb": 64, 00:17:34.794 "state": "online", 00:17:34.794 "raid_level": "raid5f", 00:17:34.794 "superblock": true, 00:17:34.794 "num_base_bdevs": 3, 00:17:34.794 "num_base_bdevs_discovered": 2, 00:17:34.794 "num_base_bdevs_operational": 2, 00:17:34.794 "base_bdevs_list": [ 00:17:34.794 { 00:17:34.794 "name": null, 00:17:34.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.794 "is_configured": false, 00:17:34.794 "data_offset": 0, 00:17:34.794 "data_size": 63488 00:17:34.794 }, 00:17:34.794 { 00:17:34.794 "name": "BaseBdev2", 00:17:34.794 "uuid": "a61f3cfe-9a55-5903-8660-3cea9810ace3", 00:17:34.794 "is_configured": true, 00:17:34.794 "data_offset": 2048, 00:17:34.794 "data_size": 63488 00:17:34.794 }, 00:17:34.794 { 00:17:34.794 "name": "BaseBdev3", 00:17:34.794 "uuid": "cf97d8b7-2f69-53dd-a097-341d806ce5f2", 00:17:34.794 "is_configured": true, 00:17:34.795 "data_offset": 2048, 00:17:34.795 "data_size": 63488 00:17:34.795 } 00:17:34.795 ] 00:17:34.795 }' 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82591 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82591 ']' 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82591 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.795 09:51:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82591 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.795 killing process with pid 82591 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82591' 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82591 00:17:34.795 Received shutdown signal, test time was about 60.000000 seconds 00:17:34.795 00:17:34.795 Latency(us) 00:17:34.795 [2024-10-11T09:51:19.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.795 [2024-10-11T09:51:19.427Z] =================================================================================================================== 00:17:34.795 [2024-10-11T09:51:19.427Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.795 [2024-10-11 09:51:19.310855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.795 09:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82591 00:17:34.795 [2024-10-11 09:51:19.311010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.795 [2024-10-11 09:51:19.311085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.795 [2024-10-11 09:51:19.311101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:35.364 [2024-10-11 09:51:19.704494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.300 09:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:36.300 00:17:36.300 real 0m23.589s 00:17:36.300 user 0m30.290s 
00:17:36.300 sys 0m2.904s 00:17:36.300 09:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:36.300 ************************************ 00:17:36.300 END TEST raid5f_rebuild_test_sb 00:17:36.300 ************************************ 00:17:36.300 09:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.300 09:51:20 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:36.300 09:51:20 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:36.300 09:51:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:36.300 09:51:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:36.300 09:51:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.300 ************************************ 00:17:36.300 START TEST raid5f_state_function_test 00:17:36.300 ************************************ 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83344 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:36.300 Process raid pid: 83344 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83344' 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83344 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83344 ']' 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.300 09:51:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.558 [2024-10-11 09:51:20.960436] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:17:36.558 [2024-10-11 09:51:20.960554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.558 [2024-10-11 09:51:21.125800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.816 [2024-10-11 09:51:21.257403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.075 [2024-10-11 09:51:21.483248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.075 [2024-10-11 09:51:21.483306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.334 [2024-10-11 09:51:21.812486] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.334 [2024-10-11 09:51:21.812543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.334 [2024-10-11 09:51:21.812554] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.334 [2024-10-11 09:51:21.812563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.334 [2024-10-11 09:51:21.812570] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:37.334 [2024-10-11 09:51:21.812579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.334 [2024-10-11 09:51:21.812585] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:37.334 [2024-10-11 09:51:21.812594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.334 09:51:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.334 "name": "Existed_Raid", 00:17:37.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.334 "strip_size_kb": 64, 00:17:37.334 "state": "configuring", 00:17:37.334 "raid_level": "raid5f", 00:17:37.334 "superblock": false, 00:17:37.334 "num_base_bdevs": 4, 00:17:37.334 "num_base_bdevs_discovered": 0, 00:17:37.334 "num_base_bdevs_operational": 4, 00:17:37.334 "base_bdevs_list": [ 00:17:37.334 { 00:17:37.334 "name": "BaseBdev1", 00:17:37.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.334 "is_configured": false, 00:17:37.334 "data_offset": 0, 00:17:37.334 "data_size": 0 00:17:37.334 }, 00:17:37.334 { 00:17:37.334 "name": "BaseBdev2", 00:17:37.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.334 "is_configured": false, 00:17:37.334 "data_offset": 0, 00:17:37.334 "data_size": 0 00:17:37.334 }, 00:17:37.334 { 00:17:37.334 "name": "BaseBdev3", 00:17:37.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.334 "is_configured": false, 00:17:37.334 "data_offset": 0, 00:17:37.334 "data_size": 0 00:17:37.334 }, 00:17:37.334 { 00:17:37.334 "name": "BaseBdev4", 00:17:37.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.334 "is_configured": false, 00:17:37.334 "data_offset": 0, 00:17:37.334 "data_size": 0 00:17:37.334 } 00:17:37.334 ] 00:17:37.334 }' 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.334 09:51:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 [2024-10-11 09:51:22.315569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.903 [2024-10-11 09:51:22.315614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 [2024-10-11 09:51:22.327551] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.903 [2024-10-11 09:51:22.327595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.903 [2024-10-11 09:51:22.327603] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.903 [2024-10-11 09:51:22.327612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.903 [2024-10-11 09:51:22.327618] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.903 [2024-10-11 09:51:22.327626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.903 [2024-10-11 09:51:22.327632] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:37.903 [2024-10-11 09:51:22.327640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 [2024-10-11 09:51:22.380143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.903 BaseBdev1 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.903 
09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 [ 00:17:37.903 { 00:17:37.903 "name": "BaseBdev1", 00:17:37.903 "aliases": [ 00:17:37.903 "4e5b3dbb-a8a6-4327-8354-59ebe05137f2" 00:17:37.903 ], 00:17:37.903 "product_name": "Malloc disk", 00:17:37.903 "block_size": 512, 00:17:37.903 "num_blocks": 65536, 00:17:37.903 "uuid": "4e5b3dbb-a8a6-4327-8354-59ebe05137f2", 00:17:37.903 "assigned_rate_limits": { 00:17:37.903 "rw_ios_per_sec": 0, 00:17:37.903 "rw_mbytes_per_sec": 0, 00:17:37.903 "r_mbytes_per_sec": 0, 00:17:37.903 "w_mbytes_per_sec": 0 00:17:37.903 }, 00:17:37.903 "claimed": true, 00:17:37.903 "claim_type": "exclusive_write", 00:17:37.903 "zoned": false, 00:17:37.903 "supported_io_types": { 00:17:37.903 "read": true, 00:17:37.903 "write": true, 00:17:37.903 "unmap": true, 00:17:37.903 "flush": true, 00:17:37.903 "reset": true, 00:17:37.903 "nvme_admin": false, 00:17:37.903 "nvme_io": false, 00:17:37.903 "nvme_io_md": false, 00:17:37.903 "write_zeroes": true, 00:17:37.903 "zcopy": true, 00:17:37.903 "get_zone_info": false, 00:17:37.903 "zone_management": false, 00:17:37.903 "zone_append": false, 00:17:37.903 "compare": false, 00:17:37.903 "compare_and_write": false, 00:17:37.903 "abort": true, 00:17:37.903 "seek_hole": false, 00:17:37.903 "seek_data": false, 00:17:37.903 "copy": true, 00:17:37.903 "nvme_iov_md": false 00:17:37.903 }, 00:17:37.903 "memory_domains": [ 00:17:37.903 { 00:17:37.903 "dma_device_id": "system", 00:17:37.903 "dma_device_type": 1 00:17:37.903 }, 00:17:37.903 { 00:17:37.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.903 "dma_device_type": 2 00:17:37.903 } 00:17:37.903 ], 00:17:37.903 "driver_specific": {} 00:17:37.903 } 
00:17:37.903 ] 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:37.903 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.903 "name": "Existed_Raid", 00:17:37.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.903 "strip_size_kb": 64, 00:17:37.903 "state": "configuring", 00:17:37.903 "raid_level": "raid5f", 00:17:37.903 "superblock": false, 00:17:37.903 "num_base_bdevs": 4, 00:17:37.903 "num_base_bdevs_discovered": 1, 00:17:37.903 "num_base_bdevs_operational": 4, 00:17:37.903 "base_bdevs_list": [ 00:17:37.903 { 00:17:37.903 "name": "BaseBdev1", 00:17:37.903 "uuid": "4e5b3dbb-a8a6-4327-8354-59ebe05137f2", 00:17:37.903 "is_configured": true, 00:17:37.903 "data_offset": 0, 00:17:37.904 "data_size": 65536 00:17:37.904 }, 00:17:37.904 { 00:17:37.904 "name": "BaseBdev2", 00:17:37.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.904 "is_configured": false, 00:17:37.904 "data_offset": 0, 00:17:37.904 "data_size": 0 00:17:37.904 }, 00:17:37.904 { 00:17:37.904 "name": "BaseBdev3", 00:17:37.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.904 "is_configured": false, 00:17:37.904 "data_offset": 0, 00:17:37.904 "data_size": 0 00:17:37.904 }, 00:17:37.904 { 00:17:37.904 "name": "BaseBdev4", 00:17:37.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.904 "is_configured": false, 00:17:37.904 "data_offset": 0, 00:17:37.904 "data_size": 0 00:17:37.904 } 00:17:37.904 ] 00:17:37.904 }' 00:17:37.904 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.904 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.471 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.471 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.471 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.471 
[2024-10-11 09:51:22.883368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.471 [2024-10-11 09:51:22.883424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:38.471 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.471 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:38.471 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.471 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.471 [2024-10-11 09:51:22.895394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.471 [2024-10-11 09:51:22.897214] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.471 [2024-10-11 09:51:22.897256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.471 [2024-10-11 09:51:22.897266] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.471 [2024-10-11 09:51:22.897275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.471 [2024-10-11 09:51:22.897282] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:38.472 [2024-10-11 09:51:22.897290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.472 "name": "Existed_Raid", 00:17:38.472 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:38.472 "strip_size_kb": 64, 00:17:38.472 "state": "configuring", 00:17:38.472 "raid_level": "raid5f", 00:17:38.472 "superblock": false, 00:17:38.472 "num_base_bdevs": 4, 00:17:38.472 "num_base_bdevs_discovered": 1, 00:17:38.472 "num_base_bdevs_operational": 4, 00:17:38.472 "base_bdevs_list": [ 00:17:38.472 { 00:17:38.472 "name": "BaseBdev1", 00:17:38.472 "uuid": "4e5b3dbb-a8a6-4327-8354-59ebe05137f2", 00:17:38.472 "is_configured": true, 00:17:38.472 "data_offset": 0, 00:17:38.472 "data_size": 65536 00:17:38.472 }, 00:17:38.472 { 00:17:38.472 "name": "BaseBdev2", 00:17:38.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.472 "is_configured": false, 00:17:38.472 "data_offset": 0, 00:17:38.472 "data_size": 0 00:17:38.472 }, 00:17:38.472 { 00:17:38.472 "name": "BaseBdev3", 00:17:38.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.472 "is_configured": false, 00:17:38.472 "data_offset": 0, 00:17:38.472 "data_size": 0 00:17:38.472 }, 00:17:38.472 { 00:17:38.472 "name": "BaseBdev4", 00:17:38.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.472 "is_configured": false, 00:17:38.472 "data_offset": 0, 00:17:38.472 "data_size": 0 00:17:38.472 } 00:17:38.472 ] 00:17:38.472 }' 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.472 09:51:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.041 [2024-10-11 09:51:23.412349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.041 BaseBdev2 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.041 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.041 [ 00:17:39.041 { 00:17:39.041 "name": "BaseBdev2", 00:17:39.041 "aliases": [ 00:17:39.041 "aed12182-a90d-46fb-91bf-fa3b823ed12e" 00:17:39.041 ], 00:17:39.041 "product_name": "Malloc disk", 00:17:39.041 "block_size": 512, 00:17:39.041 "num_blocks": 65536, 00:17:39.041 "uuid": "aed12182-a90d-46fb-91bf-fa3b823ed12e", 00:17:39.041 "assigned_rate_limits": { 00:17:39.041 "rw_ios_per_sec": 0, 00:17:39.041 "rw_mbytes_per_sec": 0, 00:17:39.041 
"r_mbytes_per_sec": 0, 00:17:39.041 "w_mbytes_per_sec": 0 00:17:39.041 }, 00:17:39.041 "claimed": true, 00:17:39.041 "claim_type": "exclusive_write", 00:17:39.041 "zoned": false, 00:17:39.041 "supported_io_types": { 00:17:39.041 "read": true, 00:17:39.041 "write": true, 00:17:39.041 "unmap": true, 00:17:39.041 "flush": true, 00:17:39.041 "reset": true, 00:17:39.041 "nvme_admin": false, 00:17:39.041 "nvme_io": false, 00:17:39.041 "nvme_io_md": false, 00:17:39.041 "write_zeroes": true, 00:17:39.041 "zcopy": true, 00:17:39.041 "get_zone_info": false, 00:17:39.041 "zone_management": false, 00:17:39.041 "zone_append": false, 00:17:39.041 "compare": false, 00:17:39.041 "compare_and_write": false, 00:17:39.041 "abort": true, 00:17:39.041 "seek_hole": false, 00:17:39.041 "seek_data": false, 00:17:39.041 "copy": true, 00:17:39.041 "nvme_iov_md": false 00:17:39.041 }, 00:17:39.041 "memory_domains": [ 00:17:39.041 { 00:17:39.041 "dma_device_id": "system", 00:17:39.041 "dma_device_type": 1 00:17:39.041 }, 00:17:39.041 { 00:17:39.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.042 "dma_device_type": 2 00:17:39.042 } 00:17:39.042 ], 00:17:39.042 "driver_specific": {} 00:17:39.042 } 00:17:39.042 ] 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.042 "name": "Existed_Raid", 00:17:39.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.042 "strip_size_kb": 64, 00:17:39.042 "state": "configuring", 00:17:39.042 "raid_level": "raid5f", 00:17:39.042 "superblock": false, 00:17:39.042 "num_base_bdevs": 4, 00:17:39.042 "num_base_bdevs_discovered": 2, 00:17:39.042 "num_base_bdevs_operational": 4, 00:17:39.042 "base_bdevs_list": [ 00:17:39.042 { 00:17:39.042 "name": "BaseBdev1", 00:17:39.042 "uuid": 
"4e5b3dbb-a8a6-4327-8354-59ebe05137f2", 00:17:39.042 "is_configured": true, 00:17:39.042 "data_offset": 0, 00:17:39.042 "data_size": 65536 00:17:39.042 }, 00:17:39.042 { 00:17:39.042 "name": "BaseBdev2", 00:17:39.042 "uuid": "aed12182-a90d-46fb-91bf-fa3b823ed12e", 00:17:39.042 "is_configured": true, 00:17:39.042 "data_offset": 0, 00:17:39.042 "data_size": 65536 00:17:39.042 }, 00:17:39.042 { 00:17:39.042 "name": "BaseBdev3", 00:17:39.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.042 "is_configured": false, 00:17:39.042 "data_offset": 0, 00:17:39.042 "data_size": 0 00:17:39.042 }, 00:17:39.042 { 00:17:39.042 "name": "BaseBdev4", 00:17:39.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.042 "is_configured": false, 00:17:39.042 "data_offset": 0, 00:17:39.042 "data_size": 0 00:17:39.042 } 00:17:39.042 ] 00:17:39.042 }' 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.042 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.301 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:39.301 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.301 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.561 [2024-10-11 09:51:23.933676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.561 BaseBdev3 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.561 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.561 [ 00:17:39.561 { 00:17:39.561 "name": "BaseBdev3", 00:17:39.561 "aliases": [ 00:17:39.561 "6bb9433c-8932-4f94-8965-262c94f10ecc" 00:17:39.561 ], 00:17:39.561 "product_name": "Malloc disk", 00:17:39.561 "block_size": 512, 00:17:39.561 "num_blocks": 65536, 00:17:39.561 "uuid": "6bb9433c-8932-4f94-8965-262c94f10ecc", 00:17:39.561 "assigned_rate_limits": { 00:17:39.561 "rw_ios_per_sec": 0, 00:17:39.561 "rw_mbytes_per_sec": 0, 00:17:39.561 "r_mbytes_per_sec": 0, 00:17:39.561 "w_mbytes_per_sec": 0 00:17:39.561 }, 00:17:39.561 "claimed": true, 00:17:39.561 "claim_type": "exclusive_write", 00:17:39.561 "zoned": false, 00:17:39.561 "supported_io_types": { 00:17:39.561 "read": true, 00:17:39.561 "write": true, 00:17:39.562 "unmap": true, 00:17:39.562 "flush": true, 00:17:39.562 "reset": true, 00:17:39.562 "nvme_admin": false, 
00:17:39.562 "nvme_io": false, 00:17:39.562 "nvme_io_md": false, 00:17:39.562 "write_zeroes": true, 00:17:39.562 "zcopy": true, 00:17:39.562 "get_zone_info": false, 00:17:39.562 "zone_management": false, 00:17:39.562 "zone_append": false, 00:17:39.562 "compare": false, 00:17:39.562 "compare_and_write": false, 00:17:39.562 "abort": true, 00:17:39.562 "seek_hole": false, 00:17:39.562 "seek_data": false, 00:17:39.562 "copy": true, 00:17:39.562 "nvme_iov_md": false 00:17:39.562 }, 00:17:39.562 "memory_domains": [ 00:17:39.562 { 00:17:39.562 "dma_device_id": "system", 00:17:39.562 "dma_device_type": 1 00:17:39.562 }, 00:17:39.562 { 00:17:39.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.562 "dma_device_type": 2 00:17:39.562 } 00:17:39.562 ], 00:17:39.562 "driver_specific": {} 00:17:39.562 } 00:17:39.562 ] 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.562 09:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.562 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.562 "name": "Existed_Raid", 00:17:39.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.562 "strip_size_kb": 64, 00:17:39.562 "state": "configuring", 00:17:39.562 "raid_level": "raid5f", 00:17:39.562 "superblock": false, 00:17:39.562 "num_base_bdevs": 4, 00:17:39.562 "num_base_bdevs_discovered": 3, 00:17:39.562 "num_base_bdevs_operational": 4, 00:17:39.562 "base_bdevs_list": [ 00:17:39.562 { 00:17:39.562 "name": "BaseBdev1", 00:17:39.562 "uuid": "4e5b3dbb-a8a6-4327-8354-59ebe05137f2", 00:17:39.562 "is_configured": true, 00:17:39.562 "data_offset": 0, 00:17:39.562 "data_size": 65536 00:17:39.562 }, 00:17:39.562 { 00:17:39.562 "name": "BaseBdev2", 00:17:39.562 "uuid": "aed12182-a90d-46fb-91bf-fa3b823ed12e", 00:17:39.562 "is_configured": true, 00:17:39.562 "data_offset": 0, 00:17:39.562 "data_size": 65536 00:17:39.562 }, 00:17:39.562 { 
00:17:39.562 "name": "BaseBdev3", 00:17:39.562 "uuid": "6bb9433c-8932-4f94-8965-262c94f10ecc", 00:17:39.562 "is_configured": true, 00:17:39.562 "data_offset": 0, 00:17:39.562 "data_size": 65536 00:17:39.562 }, 00:17:39.562 { 00:17:39.562 "name": "BaseBdev4", 00:17:39.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.562 "is_configured": false, 00:17:39.562 "data_offset": 0, 00:17:39.562 "data_size": 0 00:17:39.562 } 00:17:39.562 ] 00:17:39.562 }' 00:17:39.562 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.562 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.205 [2024-10-11 09:51:24.517608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:40.205 [2024-10-11 09:51:24.517677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:40.205 [2024-10-11 09:51:24.517687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:40.205 [2024-10-11 09:51:24.517970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:40.205 [2024-10-11 09:51:24.526696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:40.205 [2024-10-11 09:51:24.526725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:40.205 [2024-10-11 09:51:24.527052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.205 BaseBdev4 00:17:40.205 09:51:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.205 [ 00:17:40.205 { 00:17:40.205 "name": "BaseBdev4", 00:17:40.205 "aliases": [ 00:17:40.205 "f0a05f23-4282-4783-986a-1cf04178b0c0" 00:17:40.205 ], 00:17:40.205 "product_name": "Malloc disk", 00:17:40.205 "block_size": 512, 00:17:40.205 "num_blocks": 65536, 00:17:40.205 "uuid": "f0a05f23-4282-4783-986a-1cf04178b0c0", 00:17:40.205 "assigned_rate_limits": { 00:17:40.205 "rw_ios_per_sec": 0, 00:17:40.205 
"rw_mbytes_per_sec": 0, 00:17:40.205 "r_mbytes_per_sec": 0, 00:17:40.205 "w_mbytes_per_sec": 0 00:17:40.205 }, 00:17:40.205 "claimed": true, 00:17:40.205 "claim_type": "exclusive_write", 00:17:40.205 "zoned": false, 00:17:40.205 "supported_io_types": { 00:17:40.205 "read": true, 00:17:40.205 "write": true, 00:17:40.205 "unmap": true, 00:17:40.205 "flush": true, 00:17:40.205 "reset": true, 00:17:40.205 "nvme_admin": false, 00:17:40.205 "nvme_io": false, 00:17:40.205 "nvme_io_md": false, 00:17:40.205 "write_zeroes": true, 00:17:40.205 "zcopy": true, 00:17:40.205 "get_zone_info": false, 00:17:40.205 "zone_management": false, 00:17:40.205 "zone_append": false, 00:17:40.205 "compare": false, 00:17:40.205 "compare_and_write": false, 00:17:40.205 "abort": true, 00:17:40.205 "seek_hole": false, 00:17:40.205 "seek_data": false, 00:17:40.205 "copy": true, 00:17:40.205 "nvme_iov_md": false 00:17:40.205 }, 00:17:40.205 "memory_domains": [ 00:17:40.205 { 00:17:40.205 "dma_device_id": "system", 00:17:40.205 "dma_device_type": 1 00:17:40.205 }, 00:17:40.205 { 00:17:40.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.205 "dma_device_type": 2 00:17:40.205 } 00:17:40.205 ], 00:17:40.205 "driver_specific": {} 00:17:40.205 } 00:17:40.205 ] 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.205 09:51:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.205 "name": "Existed_Raid", 00:17:40.205 "uuid": "f79767f2-2c19-4ee6-96eb-dd40c77f919a", 00:17:40.205 "strip_size_kb": 64, 00:17:40.205 "state": "online", 00:17:40.205 "raid_level": "raid5f", 00:17:40.205 "superblock": false, 00:17:40.205 "num_base_bdevs": 4, 00:17:40.205 "num_base_bdevs_discovered": 4, 00:17:40.205 "num_base_bdevs_operational": 4, 00:17:40.205 "base_bdevs_list": [ 00:17:40.205 { 00:17:40.205 "name": 
"BaseBdev1", 00:17:40.205 "uuid": "4e5b3dbb-a8a6-4327-8354-59ebe05137f2", 00:17:40.205 "is_configured": true, 00:17:40.205 "data_offset": 0, 00:17:40.205 "data_size": 65536 00:17:40.205 }, 00:17:40.205 { 00:17:40.205 "name": "BaseBdev2", 00:17:40.205 "uuid": "aed12182-a90d-46fb-91bf-fa3b823ed12e", 00:17:40.205 "is_configured": true, 00:17:40.205 "data_offset": 0, 00:17:40.205 "data_size": 65536 00:17:40.205 }, 00:17:40.205 { 00:17:40.205 "name": "BaseBdev3", 00:17:40.205 "uuid": "6bb9433c-8932-4f94-8965-262c94f10ecc", 00:17:40.205 "is_configured": true, 00:17:40.205 "data_offset": 0, 00:17:40.205 "data_size": 65536 00:17:40.205 }, 00:17:40.205 { 00:17:40.205 "name": "BaseBdev4", 00:17:40.205 "uuid": "f0a05f23-4282-4783-986a-1cf04178b0c0", 00:17:40.205 "is_configured": true, 00:17:40.205 "data_offset": 0, 00:17:40.205 "data_size": 65536 00:17:40.205 } 00:17:40.205 ] 00:17:40.205 }' 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.205 09:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.465 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 [2024-10-11 09:51:25.074894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.724 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.724 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:40.724 "name": "Existed_Raid", 00:17:40.724 "aliases": [ 00:17:40.724 "f79767f2-2c19-4ee6-96eb-dd40c77f919a" 00:17:40.724 ], 00:17:40.724 "product_name": "Raid Volume", 00:17:40.724 "block_size": 512, 00:17:40.724 "num_blocks": 196608, 00:17:40.724 "uuid": "f79767f2-2c19-4ee6-96eb-dd40c77f919a", 00:17:40.724 "assigned_rate_limits": { 00:17:40.724 "rw_ios_per_sec": 0, 00:17:40.724 "rw_mbytes_per_sec": 0, 00:17:40.724 "r_mbytes_per_sec": 0, 00:17:40.724 "w_mbytes_per_sec": 0 00:17:40.725 }, 00:17:40.725 "claimed": false, 00:17:40.725 "zoned": false, 00:17:40.725 "supported_io_types": { 00:17:40.725 "read": true, 00:17:40.725 "write": true, 00:17:40.725 "unmap": false, 00:17:40.725 "flush": false, 00:17:40.725 "reset": true, 00:17:40.725 "nvme_admin": false, 00:17:40.725 "nvme_io": false, 00:17:40.725 "nvme_io_md": false, 00:17:40.725 "write_zeroes": true, 00:17:40.725 "zcopy": false, 00:17:40.725 "get_zone_info": false, 00:17:40.725 "zone_management": false, 00:17:40.725 "zone_append": false, 00:17:40.725 "compare": false, 00:17:40.725 "compare_and_write": false, 00:17:40.725 "abort": false, 00:17:40.725 "seek_hole": false, 00:17:40.725 "seek_data": false, 00:17:40.725 "copy": false, 00:17:40.725 "nvme_iov_md": false 00:17:40.725 }, 00:17:40.725 "driver_specific": { 00:17:40.725 "raid": { 00:17:40.725 "uuid": "f79767f2-2c19-4ee6-96eb-dd40c77f919a", 00:17:40.725 "strip_size_kb": 64, 
00:17:40.725 "state": "online", 00:17:40.725 "raid_level": "raid5f", 00:17:40.725 "superblock": false, 00:17:40.725 "num_base_bdevs": 4, 00:17:40.725 "num_base_bdevs_discovered": 4, 00:17:40.725 "num_base_bdevs_operational": 4, 00:17:40.725 "base_bdevs_list": [ 00:17:40.725 { 00:17:40.725 "name": "BaseBdev1", 00:17:40.725 "uuid": "4e5b3dbb-a8a6-4327-8354-59ebe05137f2", 00:17:40.725 "is_configured": true, 00:17:40.725 "data_offset": 0, 00:17:40.725 "data_size": 65536 00:17:40.725 }, 00:17:40.725 { 00:17:40.725 "name": "BaseBdev2", 00:17:40.725 "uuid": "aed12182-a90d-46fb-91bf-fa3b823ed12e", 00:17:40.725 "is_configured": true, 00:17:40.725 "data_offset": 0, 00:17:40.725 "data_size": 65536 00:17:40.725 }, 00:17:40.725 { 00:17:40.725 "name": "BaseBdev3", 00:17:40.725 "uuid": "6bb9433c-8932-4f94-8965-262c94f10ecc", 00:17:40.725 "is_configured": true, 00:17:40.725 "data_offset": 0, 00:17:40.725 "data_size": 65536 00:17:40.725 }, 00:17:40.725 { 00:17:40.725 "name": "BaseBdev4", 00:17:40.725 "uuid": "f0a05f23-4282-4783-986a-1cf04178b0c0", 00:17:40.725 "is_configured": true, 00:17:40.725 "data_offset": 0, 00:17:40.725 "data_size": 65536 00:17:40.725 } 00:17:40.725 ] 00:17:40.725 } 00:17:40.725 } 00:17:40.725 }' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:40.725 BaseBdev2 00:17:40.725 BaseBdev3 00:17:40.725 BaseBdev4' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.725 09:51:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:40.985 [2024-10-11 09:51:25.378143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.985 09:51:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.985 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.985 "name": "Existed_Raid", 00:17:40.985 "uuid": "f79767f2-2c19-4ee6-96eb-dd40c77f919a", 00:17:40.985 "strip_size_kb": 64, 00:17:40.985 "state": "online", 00:17:40.985 "raid_level": "raid5f", 00:17:40.985 "superblock": false, 00:17:40.985 "num_base_bdevs": 4, 00:17:40.985 "num_base_bdevs_discovered": 3, 00:17:40.985 "num_base_bdevs_operational": 3, 00:17:40.985 "base_bdevs_list": [ 00:17:40.985 { 00:17:40.985 "name": null, 00:17:40.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.985 "is_configured": false, 00:17:40.985 "data_offset": 0, 00:17:40.985 "data_size": 65536 00:17:40.985 }, 00:17:40.985 { 00:17:40.985 "name": "BaseBdev2", 00:17:40.985 "uuid": "aed12182-a90d-46fb-91bf-fa3b823ed12e", 00:17:40.985 "is_configured": true, 00:17:40.985 "data_offset": 0, 00:17:40.985 "data_size": 65536 00:17:40.985 }, 00:17:40.985 { 00:17:40.985 "name": "BaseBdev3", 00:17:40.985 "uuid": "6bb9433c-8932-4f94-8965-262c94f10ecc", 00:17:40.985 "is_configured": true, 00:17:40.985 "data_offset": 0, 00:17:40.985 "data_size": 65536 00:17:40.985 }, 00:17:40.985 { 00:17:40.985 "name": "BaseBdev4", 00:17:40.986 "uuid": "f0a05f23-4282-4783-986a-1cf04178b0c0", 00:17:40.986 "is_configured": true, 00:17:40.986 "data_offset": 0, 00:17:40.986 "data_size": 65536 00:17:40.986 } 00:17:40.986 ] 00:17:40.986 }' 00:17:40.986 
09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.986 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.553 09:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.553 [2024-10-11 09:51:25.955211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:41.553 [2024-10-11 09:51:25.955321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.553 [2024-10-11 09:51:26.050269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.553 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.553 [2024-10-11 09:51:26.106227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.812 [2024-10-11 09:51:26.258923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:41.812 [2024-10-11 09:51:26.258991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.812 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.072 BaseBdev2 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.072 [ 00:17:42.072 { 00:17:42.072 "name": "BaseBdev2", 00:17:42.072 "aliases": [ 00:17:42.072 "c48b8941-7863-4aa8-a81e-e69f1160a4e0" 00:17:42.072 ], 00:17:42.072 "product_name": "Malloc disk", 00:17:42.072 "block_size": 512, 00:17:42.072 "num_blocks": 65536, 00:17:42.072 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:42.072 "assigned_rate_limits": { 00:17:42.072 "rw_ios_per_sec": 0, 00:17:42.072 "rw_mbytes_per_sec": 0, 00:17:42.072 "r_mbytes_per_sec": 0, 00:17:42.072 "w_mbytes_per_sec": 0 00:17:42.072 }, 00:17:42.072 "claimed": false, 00:17:42.072 "zoned": false, 00:17:42.072 "supported_io_types": { 00:17:42.072 "read": true, 00:17:42.072 "write": true, 00:17:42.072 "unmap": true, 00:17:42.072 "flush": true, 00:17:42.072 "reset": true, 00:17:42.072 "nvme_admin": false, 00:17:42.072 "nvme_io": false, 00:17:42.072 "nvme_io_md": false, 00:17:42.072 "write_zeroes": true, 00:17:42.072 "zcopy": true, 00:17:42.072 "get_zone_info": false, 00:17:42.072 "zone_management": false, 00:17:42.072 "zone_append": false, 00:17:42.072 "compare": false, 00:17:42.072 "compare_and_write": false, 00:17:42.072 "abort": true, 00:17:42.072 "seek_hole": false, 00:17:42.072 "seek_data": false, 00:17:42.072 "copy": true, 00:17:42.072 "nvme_iov_md": false 00:17:42.072 }, 00:17:42.072 "memory_domains": [ 00:17:42.072 { 00:17:42.072 "dma_device_id": "system", 00:17:42.072 
"dma_device_type": 1 00:17:42.072 }, 00:17:42.072 { 00:17:42.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.072 "dma_device_type": 2 00:17:42.072 } 00:17:42.072 ], 00:17:42.072 "driver_specific": {} 00:17:42.072 } 00:17:42.072 ] 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.072 BaseBdev3 00:17:42.072 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:42.073 09:51:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 [ 00:17:42.073 { 00:17:42.073 "name": "BaseBdev3", 00:17:42.073 "aliases": [ 00:17:42.073 "102be06b-dc5c-47b8-813f-cec2ffa804ef" 00:17:42.073 ], 00:17:42.073 "product_name": "Malloc disk", 00:17:42.073 "block_size": 512, 00:17:42.073 "num_blocks": 65536, 00:17:42.073 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:42.073 "assigned_rate_limits": { 00:17:42.073 "rw_ios_per_sec": 0, 00:17:42.073 "rw_mbytes_per_sec": 0, 00:17:42.073 "r_mbytes_per_sec": 0, 00:17:42.073 "w_mbytes_per_sec": 0 00:17:42.073 }, 00:17:42.073 "claimed": false, 00:17:42.073 "zoned": false, 00:17:42.073 "supported_io_types": { 00:17:42.073 "read": true, 00:17:42.073 "write": true, 00:17:42.073 "unmap": true, 00:17:42.073 "flush": true, 00:17:42.073 "reset": true, 00:17:42.073 "nvme_admin": false, 00:17:42.073 "nvme_io": false, 00:17:42.073 "nvme_io_md": false, 00:17:42.073 "write_zeroes": true, 00:17:42.073 "zcopy": true, 00:17:42.073 "get_zone_info": false, 00:17:42.073 "zone_management": false, 00:17:42.073 "zone_append": false, 00:17:42.073 "compare": false, 00:17:42.073 "compare_and_write": false, 00:17:42.073 "abort": true, 00:17:42.073 "seek_hole": false, 00:17:42.073 "seek_data": false, 00:17:42.073 "copy": true, 00:17:42.073 "nvme_iov_md": false 00:17:42.073 }, 00:17:42.073 "memory_domains": [ 00:17:42.073 { 00:17:42.073 
"dma_device_id": "system", 00:17:42.073 "dma_device_type": 1 00:17:42.073 }, 00:17:42.073 { 00:17:42.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.073 "dma_device_type": 2 00:17:42.073 } 00:17:42.073 ], 00:17:42.073 "driver_specific": {} 00:17:42.073 } 00:17:42.073 ] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 BaseBdev4 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 [ 00:17:42.073 { 00:17:42.073 "name": "BaseBdev4", 00:17:42.073 "aliases": [ 00:17:42.073 "5fa211d4-335d-4881-8267-d10ca00278d2" 00:17:42.073 ], 00:17:42.073 "product_name": "Malloc disk", 00:17:42.073 "block_size": 512, 00:17:42.073 "num_blocks": 65536, 00:17:42.073 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:42.073 "assigned_rate_limits": { 00:17:42.073 "rw_ios_per_sec": 0, 00:17:42.073 "rw_mbytes_per_sec": 0, 00:17:42.073 "r_mbytes_per_sec": 0, 00:17:42.073 "w_mbytes_per_sec": 0 00:17:42.073 }, 00:17:42.073 "claimed": false, 00:17:42.073 "zoned": false, 00:17:42.073 "supported_io_types": { 00:17:42.073 "read": true, 00:17:42.073 "write": true, 00:17:42.073 "unmap": true, 00:17:42.073 "flush": true, 00:17:42.073 "reset": true, 00:17:42.073 "nvme_admin": false, 00:17:42.073 "nvme_io": false, 00:17:42.073 "nvme_io_md": false, 00:17:42.073 "write_zeroes": true, 00:17:42.073 "zcopy": true, 00:17:42.073 "get_zone_info": false, 00:17:42.073 "zone_management": false, 00:17:42.073 "zone_append": false, 00:17:42.073 "compare": false, 00:17:42.073 "compare_and_write": false, 00:17:42.073 "abort": true, 00:17:42.073 "seek_hole": false, 00:17:42.073 "seek_data": false, 00:17:42.073 "copy": true, 00:17:42.073 "nvme_iov_md": false 00:17:42.073 }, 00:17:42.073 "memory_domains": [ 
00:17:42.073 { 00:17:42.073 "dma_device_id": "system", 00:17:42.073 "dma_device_type": 1 00:17:42.073 }, 00:17:42.073 { 00:17:42.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.073 "dma_device_type": 2 00:17:42.073 } 00:17:42.073 ], 00:17:42.073 "driver_specific": {} 00:17:42.073 } 00:17:42.073 ] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 [2024-10-11 09:51:26.658402] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.073 [2024-10-11 09:51:26.658450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.073 [2024-10-11 09:51:26.658489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.073 [2024-10-11 09:51:26.660455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.073 [2024-10-11 09:51:26.660511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.073 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.332 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.332 "name": "Existed_Raid", 00:17:42.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.332 "strip_size_kb": 64, 00:17:42.332 "state": "configuring", 00:17:42.332 "raid_level": "raid5f", 00:17:42.332 
"superblock": false, 00:17:42.332 "num_base_bdevs": 4, 00:17:42.332 "num_base_bdevs_discovered": 3, 00:17:42.332 "num_base_bdevs_operational": 4, 00:17:42.332 "base_bdevs_list": [ 00:17:42.332 { 00:17:42.332 "name": "BaseBdev1", 00:17:42.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.332 "is_configured": false, 00:17:42.332 "data_offset": 0, 00:17:42.332 "data_size": 0 00:17:42.332 }, 00:17:42.332 { 00:17:42.332 "name": "BaseBdev2", 00:17:42.332 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:42.332 "is_configured": true, 00:17:42.332 "data_offset": 0, 00:17:42.332 "data_size": 65536 00:17:42.332 }, 00:17:42.332 { 00:17:42.332 "name": "BaseBdev3", 00:17:42.332 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:42.332 "is_configured": true, 00:17:42.332 "data_offset": 0, 00:17:42.332 "data_size": 65536 00:17:42.332 }, 00:17:42.332 { 00:17:42.332 "name": "BaseBdev4", 00:17:42.332 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:42.332 "is_configured": true, 00:17:42.332 "data_offset": 0, 00:17:42.332 "data_size": 65536 00:17:42.332 } 00:17:42.332 ] 00:17:42.332 }' 00:17:42.332 09:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.332 09:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.592 [2024-10-11 09:51:27.145590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.592 "name": "Existed_Raid", 00:17:42.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.592 "strip_size_kb": 64, 00:17:42.592 "state": "configuring", 00:17:42.592 "raid_level": "raid5f", 00:17:42.592 "superblock": false, 
00:17:42.592 "num_base_bdevs": 4, 00:17:42.592 "num_base_bdevs_discovered": 2, 00:17:42.592 "num_base_bdevs_operational": 4, 00:17:42.592 "base_bdevs_list": [ 00:17:42.592 { 00:17:42.592 "name": "BaseBdev1", 00:17:42.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.592 "is_configured": false, 00:17:42.592 "data_offset": 0, 00:17:42.592 "data_size": 0 00:17:42.592 }, 00:17:42.592 { 00:17:42.592 "name": null, 00:17:42.592 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:42.592 "is_configured": false, 00:17:42.592 "data_offset": 0, 00:17:42.592 "data_size": 65536 00:17:42.592 }, 00:17:42.592 { 00:17:42.592 "name": "BaseBdev3", 00:17:42.592 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:42.592 "is_configured": true, 00:17:42.592 "data_offset": 0, 00:17:42.592 "data_size": 65536 00:17:42.592 }, 00:17:42.592 { 00:17:42.592 "name": "BaseBdev4", 00:17:42.592 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:42.592 "is_configured": true, 00:17:42.592 "data_offset": 0, 00:17:42.592 "data_size": 65536 00:17:42.592 } 00:17:42.592 ] 00:17:42.592 }' 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.592 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:43.162 
09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.162 [2024-10-11 09:51:27.732536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.162 BaseBdev1 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.162 
09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.162 [ 00:17:43.162 { 00:17:43.162 "name": "BaseBdev1", 00:17:43.162 "aliases": [ 00:17:43.162 "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e" 00:17:43.162 ], 00:17:43.162 "product_name": "Malloc disk", 00:17:43.162 "block_size": 512, 00:17:43.162 "num_blocks": 65536, 00:17:43.162 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:43.162 "assigned_rate_limits": { 00:17:43.162 "rw_ios_per_sec": 0, 00:17:43.162 "rw_mbytes_per_sec": 0, 00:17:43.162 "r_mbytes_per_sec": 0, 00:17:43.162 "w_mbytes_per_sec": 0 00:17:43.162 }, 00:17:43.162 "claimed": true, 00:17:43.162 "claim_type": "exclusive_write", 00:17:43.162 "zoned": false, 00:17:43.162 "supported_io_types": { 00:17:43.162 "read": true, 00:17:43.162 "write": true, 00:17:43.162 "unmap": true, 00:17:43.162 "flush": true, 00:17:43.162 "reset": true, 00:17:43.162 "nvme_admin": false, 00:17:43.162 "nvme_io": false, 00:17:43.162 "nvme_io_md": false, 00:17:43.162 "write_zeroes": true, 00:17:43.162 "zcopy": true, 00:17:43.162 "get_zone_info": false, 00:17:43.162 "zone_management": false, 00:17:43.162 "zone_append": false, 00:17:43.162 "compare": false, 00:17:43.162 "compare_and_write": false, 00:17:43.162 "abort": true, 00:17:43.162 "seek_hole": false, 00:17:43.162 "seek_data": false, 00:17:43.162 "copy": true, 00:17:43.162 "nvme_iov_md": false 00:17:43.162 }, 00:17:43.162 "memory_domains": [ 00:17:43.162 { 00:17:43.162 "dma_device_id": "system", 00:17:43.162 "dma_device_type": 1 00:17:43.162 }, 00:17:43.162 { 00:17:43.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.162 "dma_device_type": 2 00:17:43.162 } 00:17:43.162 ], 00:17:43.162 "driver_specific": {} 00:17:43.162 } 00:17:43.162 ] 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:43.162 09:51:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.162 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.422 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.422 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.422 "name": "Existed_Raid", 00:17:43.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.422 "strip_size_kb": 64, 00:17:43.422 "state": 
"configuring", 00:17:43.422 "raid_level": "raid5f", 00:17:43.422 "superblock": false, 00:17:43.422 "num_base_bdevs": 4, 00:17:43.422 "num_base_bdevs_discovered": 3, 00:17:43.422 "num_base_bdevs_operational": 4, 00:17:43.422 "base_bdevs_list": [ 00:17:43.422 { 00:17:43.422 "name": "BaseBdev1", 00:17:43.422 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:43.422 "is_configured": true, 00:17:43.422 "data_offset": 0, 00:17:43.422 "data_size": 65536 00:17:43.422 }, 00:17:43.422 { 00:17:43.422 "name": null, 00:17:43.422 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:43.423 "is_configured": false, 00:17:43.423 "data_offset": 0, 00:17:43.423 "data_size": 65536 00:17:43.423 }, 00:17:43.423 { 00:17:43.423 "name": "BaseBdev3", 00:17:43.423 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:43.423 "is_configured": true, 00:17:43.423 "data_offset": 0, 00:17:43.423 "data_size": 65536 00:17:43.423 }, 00:17:43.423 { 00:17:43.423 "name": "BaseBdev4", 00:17:43.423 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:43.423 "is_configured": true, 00:17:43.423 "data_offset": 0, 00:17:43.423 "data_size": 65536 00:17:43.423 } 00:17:43.423 ] 00:17:43.423 }' 00:17:43.423 09:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.423 09:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.682 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.682 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.682 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.683 09:51:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.683 [2024-10-11 09:51:28.235819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.683 09:51:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.683 "name": "Existed_Raid", 00:17:43.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.683 "strip_size_kb": 64, 00:17:43.683 "state": "configuring", 00:17:43.683 "raid_level": "raid5f", 00:17:43.683 "superblock": false, 00:17:43.683 "num_base_bdevs": 4, 00:17:43.683 "num_base_bdevs_discovered": 2, 00:17:43.683 "num_base_bdevs_operational": 4, 00:17:43.683 "base_bdevs_list": [ 00:17:43.683 { 00:17:43.683 "name": "BaseBdev1", 00:17:43.683 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:43.683 "is_configured": true, 00:17:43.683 "data_offset": 0, 00:17:43.683 "data_size": 65536 00:17:43.683 }, 00:17:43.683 { 00:17:43.683 "name": null, 00:17:43.683 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:43.683 "is_configured": false, 00:17:43.683 "data_offset": 0, 00:17:43.683 "data_size": 65536 00:17:43.683 }, 00:17:43.683 { 00:17:43.683 "name": null, 00:17:43.683 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:43.683 "is_configured": false, 00:17:43.683 "data_offset": 0, 00:17:43.683 "data_size": 65536 00:17:43.683 }, 00:17:43.683 { 00:17:43.683 "name": "BaseBdev4", 00:17:43.683 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:43.683 "is_configured": true, 00:17:43.683 "data_offset": 0, 00:17:43.683 "data_size": 65536 00:17:43.683 } 00:17:43.683 ] 00:17:43.683 }' 00:17:43.683 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.683 09:51:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.253 [2024-10-11 09:51:28.766877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.253 
09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.253 "name": "Existed_Raid", 00:17:44.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.253 "strip_size_kb": 64, 00:17:44.253 "state": "configuring", 00:17:44.253 "raid_level": "raid5f", 00:17:44.253 "superblock": false, 00:17:44.253 "num_base_bdevs": 4, 00:17:44.253 "num_base_bdevs_discovered": 3, 00:17:44.253 "num_base_bdevs_operational": 4, 00:17:44.253 "base_bdevs_list": [ 00:17:44.253 { 00:17:44.253 "name": "BaseBdev1", 00:17:44.253 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:44.253 "is_configured": true, 00:17:44.253 "data_offset": 0, 00:17:44.253 "data_size": 65536 00:17:44.253 }, 00:17:44.253 { 00:17:44.253 "name": null, 00:17:44.253 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:44.253 "is_configured": 
false, 00:17:44.253 "data_offset": 0, 00:17:44.253 "data_size": 65536 00:17:44.253 }, 00:17:44.253 { 00:17:44.253 "name": "BaseBdev3", 00:17:44.253 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:44.253 "is_configured": true, 00:17:44.253 "data_offset": 0, 00:17:44.253 "data_size": 65536 00:17:44.253 }, 00:17:44.253 { 00:17:44.253 "name": "BaseBdev4", 00:17:44.253 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:44.253 "is_configured": true, 00:17:44.253 "data_offset": 0, 00:17:44.253 "data_size": 65536 00:17:44.253 } 00:17:44.253 ] 00:17:44.253 }' 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.253 09:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.821 [2024-10-11 09:51:29.274028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.821 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.822 "name": "Existed_Raid", 00:17:44.822 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:44.822 "strip_size_kb": 64, 00:17:44.822 "state": "configuring", 00:17:44.822 "raid_level": "raid5f", 00:17:44.822 "superblock": false, 00:17:44.822 "num_base_bdevs": 4, 00:17:44.822 "num_base_bdevs_discovered": 2, 00:17:44.822 "num_base_bdevs_operational": 4, 00:17:44.822 "base_bdevs_list": [ 00:17:44.822 { 00:17:44.822 "name": null, 00:17:44.822 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:44.822 "is_configured": false, 00:17:44.822 "data_offset": 0, 00:17:44.822 "data_size": 65536 00:17:44.822 }, 00:17:44.822 { 00:17:44.822 "name": null, 00:17:44.822 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:44.822 "is_configured": false, 00:17:44.822 "data_offset": 0, 00:17:44.822 "data_size": 65536 00:17:44.822 }, 00:17:44.822 { 00:17:44.822 "name": "BaseBdev3", 00:17:44.822 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:44.822 "is_configured": true, 00:17:44.822 "data_offset": 0, 00:17:44.822 "data_size": 65536 00:17:44.822 }, 00:17:44.822 { 00:17:44.822 "name": "BaseBdev4", 00:17:44.822 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:44.822 "is_configured": true, 00:17:44.822 "data_offset": 0, 00:17:44.822 "data_size": 65536 00:17:44.822 } 00:17:44.822 ] 00:17:44.822 }' 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.822 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.391 [2024-10-11 09:51:29.925229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.391 "name": "Existed_Raid", 00:17:45.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.391 "strip_size_kb": 64, 00:17:45.391 "state": "configuring", 00:17:45.391 "raid_level": "raid5f", 00:17:45.391 "superblock": false, 00:17:45.391 "num_base_bdevs": 4, 00:17:45.391 "num_base_bdevs_discovered": 3, 00:17:45.391 "num_base_bdevs_operational": 4, 00:17:45.391 "base_bdevs_list": [ 00:17:45.391 { 00:17:45.391 "name": null, 00:17:45.391 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:45.391 "is_configured": false, 00:17:45.391 "data_offset": 0, 00:17:45.391 "data_size": 65536 00:17:45.391 }, 00:17:45.391 { 00:17:45.391 "name": "BaseBdev2", 00:17:45.391 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:45.391 "is_configured": true, 00:17:45.391 "data_offset": 0, 00:17:45.391 "data_size": 65536 00:17:45.391 }, 00:17:45.391 { 00:17:45.391 "name": "BaseBdev3", 00:17:45.391 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:45.391 "is_configured": true, 00:17:45.391 "data_offset": 0, 00:17:45.391 "data_size": 65536 00:17:45.391 }, 00:17:45.391 { 00:17:45.391 "name": "BaseBdev4", 00:17:45.391 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:45.391 "is_configured": true, 00:17:45.391 "data_offset": 0, 00:17:45.391 "data_size": 65536 00:17:45.391 } 00:17:45.391 ] 00:17:45.391 }' 00:17:45.391 09:51:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.391 09:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cdc41b5f-b3df-4a0d-8cf2-c4589e09121e 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.961 [2024-10-11 09:51:30.527360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:45.961 [2024-10-11 
09:51:30.527423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:45.961 [2024-10-11 09:51:30.527431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:45.961 [2024-10-11 09:51:30.527689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:45.961 [2024-10-11 09:51:30.535091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:45.961 [2024-10-11 09:51:30.535117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:45.961 [2024-10-11 09:51:30.535400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.961 NewBaseBdev 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.961 [ 00:17:45.961 { 00:17:45.961 "name": "NewBaseBdev", 00:17:45.961 "aliases": [ 00:17:45.961 "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e" 00:17:45.961 ], 00:17:45.961 "product_name": "Malloc disk", 00:17:45.961 "block_size": 512, 00:17:45.961 "num_blocks": 65536, 00:17:45.961 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:45.961 "assigned_rate_limits": { 00:17:45.961 "rw_ios_per_sec": 0, 00:17:45.961 "rw_mbytes_per_sec": 0, 00:17:45.961 "r_mbytes_per_sec": 0, 00:17:45.961 "w_mbytes_per_sec": 0 00:17:45.961 }, 00:17:45.961 "claimed": true, 00:17:45.961 "claim_type": "exclusive_write", 00:17:45.961 "zoned": false, 00:17:45.961 "supported_io_types": { 00:17:45.961 "read": true, 00:17:45.961 "write": true, 00:17:45.961 "unmap": true, 00:17:45.961 "flush": true, 00:17:45.961 "reset": true, 00:17:45.961 "nvme_admin": false, 00:17:45.961 "nvme_io": false, 00:17:45.961 "nvme_io_md": false, 00:17:45.961 "write_zeroes": true, 00:17:45.961 "zcopy": true, 00:17:45.961 "get_zone_info": false, 00:17:45.961 "zone_management": false, 00:17:45.961 "zone_append": false, 00:17:45.961 "compare": false, 00:17:45.961 "compare_and_write": false, 00:17:45.961 "abort": true, 00:17:45.961 "seek_hole": false, 00:17:45.961 "seek_data": false, 00:17:45.961 "copy": true, 00:17:45.961 "nvme_iov_md": false 00:17:45.961 }, 00:17:45.961 "memory_domains": [ 00:17:45.961 { 00:17:45.961 "dma_device_id": "system", 00:17:45.961 "dma_device_type": 1 00:17:45.961 }, 00:17:45.961 { 00:17:45.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.961 "dma_device_type": 2 00:17:45.961 } 
00:17:45.961 ], 00:17:45.961 "driver_specific": {} 00:17:45.961 } 00:17:45.961 ] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.961 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.221 09:51:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.221 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.221 "name": "Existed_Raid", 00:17:46.221 "uuid": "d6f04f7e-6c79-40a9-a499-26ae6cce5424", 00:17:46.221 "strip_size_kb": 64, 00:17:46.221 "state": "online", 00:17:46.221 "raid_level": "raid5f", 00:17:46.221 "superblock": false, 00:17:46.221 "num_base_bdevs": 4, 00:17:46.221 "num_base_bdevs_discovered": 4, 00:17:46.221 "num_base_bdevs_operational": 4, 00:17:46.221 "base_bdevs_list": [ 00:17:46.221 { 00:17:46.221 "name": "NewBaseBdev", 00:17:46.221 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:46.221 "is_configured": true, 00:17:46.221 "data_offset": 0, 00:17:46.221 "data_size": 65536 00:17:46.221 }, 00:17:46.221 { 00:17:46.221 "name": "BaseBdev2", 00:17:46.221 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:46.221 "is_configured": true, 00:17:46.221 "data_offset": 0, 00:17:46.221 "data_size": 65536 00:17:46.221 }, 00:17:46.221 { 00:17:46.221 "name": "BaseBdev3", 00:17:46.221 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:46.221 "is_configured": true, 00:17:46.221 "data_offset": 0, 00:17:46.221 "data_size": 65536 00:17:46.221 }, 00:17:46.221 { 00:17:46.221 "name": "BaseBdev4", 00:17:46.221 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:46.221 "is_configured": true, 00:17:46.221 "data_offset": 0, 00:17:46.221 "data_size": 65536 00:17:46.221 } 00:17:46.221 ] 00:17:46.221 }' 00:17:46.221 09:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.221 09:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.481 [2024-10-11 09:51:31.058512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.481 "name": "Existed_Raid", 00:17:46.481 "aliases": [ 00:17:46.481 "d6f04f7e-6c79-40a9-a499-26ae6cce5424" 00:17:46.481 ], 00:17:46.481 "product_name": "Raid Volume", 00:17:46.481 "block_size": 512, 00:17:46.481 "num_blocks": 196608, 00:17:46.481 "uuid": "d6f04f7e-6c79-40a9-a499-26ae6cce5424", 00:17:46.481 "assigned_rate_limits": { 00:17:46.481 "rw_ios_per_sec": 0, 00:17:46.481 "rw_mbytes_per_sec": 0, 00:17:46.481 "r_mbytes_per_sec": 0, 00:17:46.481 "w_mbytes_per_sec": 0 00:17:46.481 }, 00:17:46.481 "claimed": false, 00:17:46.481 "zoned": false, 00:17:46.481 "supported_io_types": { 00:17:46.481 "read": true, 00:17:46.481 "write": true, 00:17:46.481 "unmap": false, 00:17:46.481 "flush": false, 00:17:46.481 "reset": true, 00:17:46.481 "nvme_admin": false, 00:17:46.481 "nvme_io": false, 00:17:46.481 "nvme_io_md": 
false, 00:17:46.481 "write_zeroes": true, 00:17:46.481 "zcopy": false, 00:17:46.481 "get_zone_info": false, 00:17:46.481 "zone_management": false, 00:17:46.481 "zone_append": false, 00:17:46.481 "compare": false, 00:17:46.481 "compare_and_write": false, 00:17:46.481 "abort": false, 00:17:46.481 "seek_hole": false, 00:17:46.481 "seek_data": false, 00:17:46.481 "copy": false, 00:17:46.481 "nvme_iov_md": false 00:17:46.481 }, 00:17:46.481 "driver_specific": { 00:17:46.481 "raid": { 00:17:46.481 "uuid": "d6f04f7e-6c79-40a9-a499-26ae6cce5424", 00:17:46.481 "strip_size_kb": 64, 00:17:46.481 "state": "online", 00:17:46.481 "raid_level": "raid5f", 00:17:46.481 "superblock": false, 00:17:46.481 "num_base_bdevs": 4, 00:17:46.481 "num_base_bdevs_discovered": 4, 00:17:46.481 "num_base_bdevs_operational": 4, 00:17:46.481 "base_bdevs_list": [ 00:17:46.481 { 00:17:46.481 "name": "NewBaseBdev", 00:17:46.481 "uuid": "cdc41b5f-b3df-4a0d-8cf2-c4589e09121e", 00:17:46.481 "is_configured": true, 00:17:46.481 "data_offset": 0, 00:17:46.481 "data_size": 65536 00:17:46.481 }, 00:17:46.481 { 00:17:46.481 "name": "BaseBdev2", 00:17:46.481 "uuid": "c48b8941-7863-4aa8-a81e-e69f1160a4e0", 00:17:46.481 "is_configured": true, 00:17:46.481 "data_offset": 0, 00:17:46.481 "data_size": 65536 00:17:46.481 }, 00:17:46.481 { 00:17:46.481 "name": "BaseBdev3", 00:17:46.481 "uuid": "102be06b-dc5c-47b8-813f-cec2ffa804ef", 00:17:46.481 "is_configured": true, 00:17:46.481 "data_offset": 0, 00:17:46.481 "data_size": 65536 00:17:46.481 }, 00:17:46.481 { 00:17:46.481 "name": "BaseBdev4", 00:17:46.481 "uuid": "5fa211d4-335d-4881-8267-d10ca00278d2", 00:17:46.481 "is_configured": true, 00:17:46.481 "data_offset": 0, 00:17:46.481 "data_size": 65536 00:17:46.481 } 00:17:46.481 ] 00:17:46.481 } 00:17:46.481 } 00:17:46.481 }' 00:17:46.481 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.740 09:51:31 
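An aside for readers following the dump above: the Raid Volume reports `num_blocks: 196608` built from four base bdevs of 65536 blocks each. That is consistent with standard RAID-5 capacity accounting (one base bdev's worth of space goes to parity), which the following standalone sketch reproduces — it is illustrative arithmetic only, not part of the test or of SPDK itself:

```python
# Sketch (assumption: raid5f uses RAID-5-style capacity accounting,
# i.e. usable capacity = (N - 1) * per-bdev capacity).
# Values taken from the log: 4 base bdevs, 65536 blocks each, 512 B blocks.

def raid5f_num_blocks(num_base_bdevs: int, base_num_blocks: int) -> int:
    """Usable blocks of a raid5f volume: one base bdev's worth of
    space is consumed by parity, spread across all members."""
    return (num_base_bdevs - 1) * base_num_blocks

blocks = raid5f_num_blocks(4, 65536)
print(blocks)               # 196608, matching num_blocks in the dump
print(blocks * 512 // 2**20)  # 96 MiB of usable capacity at 512 B/block
```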
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:46.740 BaseBdev2 00:17:46.740 BaseBdev3 00:17:46.740 BaseBdev4' 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.740 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.741 09:51:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.741 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.741 [2024-10-11 09:51:31.369755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.741 [2024-10-11 09:51:31.369801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.741 [2024-10-11 09:51:31.369894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.741 [2024-10-11 09:51:31.370250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.741 [2024-10-11 09:51:31.370272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83344 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83344 ']' 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83344 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83344 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.000 killing process with pid 83344 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83344' 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 83344 00:17:47.000 [2024-10-11 09:51:31.417704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.000 09:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 83344 00:17:47.260 [2024-10-11 09:51:31.831068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:48.688 00:17:48.688 real 0m12.064s 00:17:48.688 user 0m19.226s 00:17:48.688 sys 0m2.266s 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.688 ************************************ 00:17:48.688 END TEST raid5f_state_function_test 00:17:48.688 ************************************ 00:17:48.688 09:51:32 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:48.688 09:51:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:48.688 09:51:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.688 09:51:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.688 ************************************ 00:17:48.688 START TEST 
raid5f_state_function_test_sb 00:17:48.688 ************************************ 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:48.688 09:51:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:48.688 
09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84021 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:48.688 Process raid pid: 84021 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84021' 00:17:48.688 09:51:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84021 00:17:48.688 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84021 ']' 00:17:48.689 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.689 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.689 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.689 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.689 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.689 [2024-10-11 09:51:33.095469] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:17:48.689 [2024-10-11 09:51:33.096152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.689 [2024-10-11 09:51:33.258759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.979 [2024-10-11 09:51:33.384919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.237 [2024-10-11 09:51:33.612975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.237 [2024-10-11 09:51:33.613010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.496 [2024-10-11 09:51:33.968723] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.496 [2024-10-11 09:51:33.968960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.496 [2024-10-11 09:51:33.968976] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.496 [2024-10-11 09:51:33.969036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.496 [2024-10-11 09:51:33.969046] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:49.496 [2024-10-11 09:51:33.969100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.496 [2024-10-11 09:51:33.969112] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:49.496 [2024-10-11 09:51:33.969196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.496 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.497 09:51:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:49.497 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.497 09:51:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.497 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.497 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.497 "name": "Existed_Raid", 00:17:49.497 "uuid": "ab11101a-1b66-4598-8b75-e6ae54861861", 00:17:49.497 "strip_size_kb": 64, 00:17:49.497 "state": "configuring", 00:17:49.497 "raid_level": "raid5f", 00:17:49.497 "superblock": true, 00:17:49.497 "num_base_bdevs": 4, 00:17:49.497 "num_base_bdevs_discovered": 0, 00:17:49.497 "num_base_bdevs_operational": 4, 00:17:49.497 "base_bdevs_list": [ 00:17:49.497 { 00:17:49.497 "name": "BaseBdev1", 00:17:49.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.497 "is_configured": false, 00:17:49.497 "data_offset": 0, 00:17:49.497 "data_size": 0 00:17:49.497 }, 00:17:49.497 { 00:17:49.497 "name": "BaseBdev2", 00:17:49.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.497 "is_configured": false, 00:17:49.497 "data_offset": 0, 00:17:49.497 "data_size": 0 00:17:49.497 }, 00:17:49.497 { 00:17:49.497 "name": "BaseBdev3", 00:17:49.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.497 "is_configured": false, 00:17:49.497 "data_offset": 0, 00:17:49.497 "data_size": 0 00:17:49.497 }, 00:17:49.497 { 00:17:49.497 "name": "BaseBdev4", 00:17:49.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.497 "is_configured": false, 00:17:49.497 "data_offset": 0, 00:17:49.497 "data_size": 0 00:17:49.497 } 00:17:49.497 ] 00:17:49.497 }' 00:17:49.497 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.497 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.066 [2024-10-11 09:51:34.411863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.066 [2024-10-11 09:51:34.411960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.066 [2024-10-11 09:51:34.423877] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.066 [2024-10-11 09:51:34.424288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.066 [2024-10-11 09:51:34.424343] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.066 [2024-10-11 09:51:34.424446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.066 [2024-10-11 09:51:34.424491] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.066 [2024-10-11 09:51:34.424565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.066 [2024-10-11 09:51:34.424600] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:50.066 [2024-10-11 09:51:34.424667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.066 [2024-10-11 09:51:34.473495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.066 BaseBdev1 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.066 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.066 [ 00:17:50.066 { 00:17:50.066 "name": "BaseBdev1", 00:17:50.066 "aliases": [ 00:17:50.066 "d37ce29c-fcfc-4855-b047-57acadde3be4" 00:17:50.066 ], 00:17:50.067 "product_name": "Malloc disk", 00:17:50.067 "block_size": 512, 00:17:50.067 "num_blocks": 65536, 00:17:50.067 "uuid": "d37ce29c-fcfc-4855-b047-57acadde3be4", 00:17:50.067 "assigned_rate_limits": { 00:17:50.067 "rw_ios_per_sec": 0, 00:17:50.067 "rw_mbytes_per_sec": 0, 00:17:50.067 "r_mbytes_per_sec": 0, 00:17:50.067 "w_mbytes_per_sec": 0 00:17:50.067 }, 00:17:50.067 "claimed": true, 00:17:50.067 "claim_type": "exclusive_write", 00:17:50.067 "zoned": false, 00:17:50.067 "supported_io_types": { 00:17:50.067 "read": true, 00:17:50.067 "write": true, 00:17:50.067 "unmap": true, 00:17:50.067 "flush": true, 00:17:50.067 "reset": true, 00:17:50.067 "nvme_admin": false, 00:17:50.067 "nvme_io": false, 00:17:50.067 "nvme_io_md": false, 00:17:50.067 "write_zeroes": true, 00:17:50.067 "zcopy": true, 00:17:50.067 "get_zone_info": false, 00:17:50.067 "zone_management": false, 00:17:50.067 "zone_append": false, 00:17:50.067 "compare": false, 00:17:50.067 "compare_and_write": false, 00:17:50.067 "abort": true, 00:17:50.067 "seek_hole": false, 00:17:50.067 "seek_data": false, 00:17:50.067 "copy": true, 00:17:50.067 "nvme_iov_md": false 00:17:50.067 }, 00:17:50.067 "memory_domains": [ 00:17:50.067 { 00:17:50.067 "dma_device_id": "system", 00:17:50.067 "dma_device_type": 1 00:17:50.067 }, 00:17:50.067 { 00:17:50.067 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:50.067 "dma_device_type": 2 00:17:50.067 } 00:17:50.067 ], 00:17:50.067 "driver_specific": {} 00:17:50.067 } 00:17:50.067 ] 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.067 09:51:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.067 "name": "Existed_Raid", 00:17:50.067 "uuid": "c5ac3f03-a5eb-491a-a188-0227113edbd2", 00:17:50.067 "strip_size_kb": 64, 00:17:50.067 "state": "configuring", 00:17:50.067 "raid_level": "raid5f", 00:17:50.067 "superblock": true, 00:17:50.067 "num_base_bdevs": 4, 00:17:50.067 "num_base_bdevs_discovered": 1, 00:17:50.067 "num_base_bdevs_operational": 4, 00:17:50.067 "base_bdevs_list": [ 00:17:50.067 { 00:17:50.067 "name": "BaseBdev1", 00:17:50.067 "uuid": "d37ce29c-fcfc-4855-b047-57acadde3be4", 00:17:50.067 "is_configured": true, 00:17:50.067 "data_offset": 2048, 00:17:50.067 "data_size": 63488 00:17:50.067 }, 00:17:50.067 { 00:17:50.067 "name": "BaseBdev2", 00:17:50.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.067 "is_configured": false, 00:17:50.067 "data_offset": 0, 00:17:50.067 "data_size": 0 00:17:50.067 }, 00:17:50.067 { 00:17:50.067 "name": "BaseBdev3", 00:17:50.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.067 "is_configured": false, 00:17:50.067 "data_offset": 0, 00:17:50.067 "data_size": 0 00:17:50.067 }, 00:17:50.067 { 00:17:50.067 "name": "BaseBdev4", 00:17:50.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.067 "is_configured": false, 00:17:50.067 "data_offset": 0, 00:17:50.067 "data_size": 0 00:17:50.067 } 00:17:50.067 ] 00:17:50.067 }' 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.067 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:50.327 09:51:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.327 [2024-10-11 09:51:34.916830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.327 [2024-10-11 09:51:34.916889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.327 [2024-10-11 09:51:34.924871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.327 [2024-10-11 09:51:34.926837] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.327 [2024-10-11 09:51:34.927010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.327 [2024-10-11 09:51:34.927023] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.327 [2024-10-11 09:51:34.927069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.327 [2024-10-11 09:51:34.927077] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:50.327 [2024-10-11 09:51:34.927152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.327 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.327 09:51:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.587 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.587 "name": "Existed_Raid", 00:17:50.587 "uuid": "5d875cf0-c61a-4fbf-b1f3-a421f82dba14", 00:17:50.587 "strip_size_kb": 64, 00:17:50.587 "state": "configuring", 00:17:50.587 "raid_level": "raid5f", 00:17:50.587 "superblock": true, 00:17:50.587 "num_base_bdevs": 4, 00:17:50.587 "num_base_bdevs_discovered": 1, 00:17:50.587 "num_base_bdevs_operational": 4, 00:17:50.587 "base_bdevs_list": [ 00:17:50.587 { 00:17:50.587 "name": "BaseBdev1", 00:17:50.587 "uuid": "d37ce29c-fcfc-4855-b047-57acadde3be4", 00:17:50.587 "is_configured": true, 00:17:50.587 "data_offset": 2048, 00:17:50.587 "data_size": 63488 00:17:50.587 }, 00:17:50.587 { 00:17:50.587 "name": "BaseBdev2", 00:17:50.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.587 "is_configured": false, 00:17:50.587 "data_offset": 0, 00:17:50.587 "data_size": 0 00:17:50.587 }, 00:17:50.587 { 00:17:50.587 "name": "BaseBdev3", 00:17:50.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.587 "is_configured": false, 00:17:50.587 "data_offset": 0, 00:17:50.587 "data_size": 0 00:17:50.587 }, 00:17:50.587 { 00:17:50.587 "name": "BaseBdev4", 00:17:50.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.587 "is_configured": false, 00:17:50.587 "data_offset": 0, 00:17:50.587 "data_size": 0 00:17:50.587 } 00:17:50.587 ] 00:17:50.587 }' 00:17:50.587 09:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.587 09:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.846 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:50.846 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
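The `verify_raid_bdev_state` checks in this log repeat after each base bdev is added: the test fetches the array's JSON via `bdev_raid_get_bdevs all`, selects the entry by name with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares state and member counts. A minimal Python sketch of that comparison, assuming only the JSON shape shown in the dumps above (the helper name mirrors the shell function; the sample dict is condensed from the logged output):

```python
import json

def verify_raid_bdev_state(info_json, expected_state, raid_level, strip_size, num_operational):
    """Mirror of the shell-side checks: compare state, level, strip size, and
    operational count, then cross-check the discovered-member counter against
    the per-member is_configured flags. Returns the discovered count."""
    info = json.loads(info_json)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

# A configuring array holding one of four members, as in the dump above:
sample = json.dumps({
    "name": "Existed_Raid", "state": "configuring", "raid_level": "raid5f",
    "strip_size_kb": 64, "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": False},
        {"name": "BaseBdev3", "is_configured": False},
        {"name": "BaseBdev4", "is_configured": False},
    ],
})
print(verify_raid_bdev_state(sample, "configuring", "raid5f", 64, 4))  # → 1
```

In the actual test the same verification runs with `expected_state=configuring` after each `bdev_malloc_create`, and with `online` only once all four members are claimed.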
00:17:50.846 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.846 [2024-10-11 09:51:35.433124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.846 BaseBdev2 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.847 [ 00:17:50.847 { 00:17:50.847 "name": "BaseBdev2", 00:17:50.847 "aliases": [ 00:17:50.847 
"480c032f-1d6c-401c-825c-2a5a2845195c" 00:17:50.847 ], 00:17:50.847 "product_name": "Malloc disk", 00:17:50.847 "block_size": 512, 00:17:50.847 "num_blocks": 65536, 00:17:50.847 "uuid": "480c032f-1d6c-401c-825c-2a5a2845195c", 00:17:50.847 "assigned_rate_limits": { 00:17:50.847 "rw_ios_per_sec": 0, 00:17:50.847 "rw_mbytes_per_sec": 0, 00:17:50.847 "r_mbytes_per_sec": 0, 00:17:50.847 "w_mbytes_per_sec": 0 00:17:50.847 }, 00:17:50.847 "claimed": true, 00:17:50.847 "claim_type": "exclusive_write", 00:17:50.847 "zoned": false, 00:17:50.847 "supported_io_types": { 00:17:50.847 "read": true, 00:17:50.847 "write": true, 00:17:50.847 "unmap": true, 00:17:50.847 "flush": true, 00:17:50.847 "reset": true, 00:17:50.847 "nvme_admin": false, 00:17:50.847 "nvme_io": false, 00:17:50.847 "nvme_io_md": false, 00:17:50.847 "write_zeroes": true, 00:17:50.847 "zcopy": true, 00:17:50.847 "get_zone_info": false, 00:17:50.847 "zone_management": false, 00:17:50.847 "zone_append": false, 00:17:50.847 "compare": false, 00:17:50.847 "compare_and_write": false, 00:17:50.847 "abort": true, 00:17:50.847 "seek_hole": false, 00:17:50.847 "seek_data": false, 00:17:50.847 "copy": true, 00:17:50.847 "nvme_iov_md": false 00:17:50.847 }, 00:17:50.847 "memory_domains": [ 00:17:50.847 { 00:17:50.847 "dma_device_id": "system", 00:17:50.847 "dma_device_type": 1 00:17:50.847 }, 00:17:50.847 { 00:17:50.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.847 "dma_device_type": 2 00:17:50.847 } 00:17:50.847 ], 00:17:50.847 "driver_specific": {} 00:17:50.847 } 00:17:50.847 ] 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.847 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.107 "name": "Existed_Raid", 00:17:51.107 "uuid": 
"5d875cf0-c61a-4fbf-b1f3-a421f82dba14", 00:17:51.107 "strip_size_kb": 64, 00:17:51.107 "state": "configuring", 00:17:51.107 "raid_level": "raid5f", 00:17:51.107 "superblock": true, 00:17:51.107 "num_base_bdevs": 4, 00:17:51.107 "num_base_bdevs_discovered": 2, 00:17:51.107 "num_base_bdevs_operational": 4, 00:17:51.107 "base_bdevs_list": [ 00:17:51.107 { 00:17:51.107 "name": "BaseBdev1", 00:17:51.107 "uuid": "d37ce29c-fcfc-4855-b047-57acadde3be4", 00:17:51.107 "is_configured": true, 00:17:51.107 "data_offset": 2048, 00:17:51.107 "data_size": 63488 00:17:51.107 }, 00:17:51.107 { 00:17:51.107 "name": "BaseBdev2", 00:17:51.107 "uuid": "480c032f-1d6c-401c-825c-2a5a2845195c", 00:17:51.107 "is_configured": true, 00:17:51.107 "data_offset": 2048, 00:17:51.107 "data_size": 63488 00:17:51.107 }, 00:17:51.107 { 00:17:51.107 "name": "BaseBdev3", 00:17:51.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.107 "is_configured": false, 00:17:51.107 "data_offset": 0, 00:17:51.107 "data_size": 0 00:17:51.107 }, 00:17:51.107 { 00:17:51.107 "name": "BaseBdev4", 00:17:51.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.107 "is_configured": false, 00:17:51.107 "data_offset": 0, 00:17:51.107 "data_size": 0 00:17:51.107 } 00:17:51.107 ] 00:17:51.107 }' 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.107 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.371 09:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:51.371 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.371 09:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.635 [2024-10-11 09:51:36.016107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.635 BaseBdev3 
00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.635 [ 00:17:51.635 { 00:17:51.635 "name": "BaseBdev3", 00:17:51.635 "aliases": [ 00:17:51.635 "fb7cb907-a662-4b81-aa33-8a09c88565ba" 00:17:51.635 ], 00:17:51.635 "product_name": "Malloc disk", 00:17:51.635 "block_size": 512, 00:17:51.635 "num_blocks": 65536, 00:17:51.635 "uuid": "fb7cb907-a662-4b81-aa33-8a09c88565ba", 00:17:51.635 
"assigned_rate_limits": { 00:17:51.635 "rw_ios_per_sec": 0, 00:17:51.635 "rw_mbytes_per_sec": 0, 00:17:51.635 "r_mbytes_per_sec": 0, 00:17:51.635 "w_mbytes_per_sec": 0 00:17:51.635 }, 00:17:51.635 "claimed": true, 00:17:51.635 "claim_type": "exclusive_write", 00:17:51.635 "zoned": false, 00:17:51.635 "supported_io_types": { 00:17:51.635 "read": true, 00:17:51.635 "write": true, 00:17:51.635 "unmap": true, 00:17:51.635 "flush": true, 00:17:51.635 "reset": true, 00:17:51.635 "nvme_admin": false, 00:17:51.635 "nvme_io": false, 00:17:51.635 "nvme_io_md": false, 00:17:51.635 "write_zeroes": true, 00:17:51.635 "zcopy": true, 00:17:51.635 "get_zone_info": false, 00:17:51.635 "zone_management": false, 00:17:51.635 "zone_append": false, 00:17:51.635 "compare": false, 00:17:51.635 "compare_and_write": false, 00:17:51.635 "abort": true, 00:17:51.635 "seek_hole": false, 00:17:51.635 "seek_data": false, 00:17:51.635 "copy": true, 00:17:51.635 "nvme_iov_md": false 00:17:51.635 }, 00:17:51.635 "memory_domains": [ 00:17:51.635 { 00:17:51.635 "dma_device_id": "system", 00:17:51.635 "dma_device_type": 1 00:17:51.635 }, 00:17:51.635 { 00:17:51.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.635 "dma_device_type": 2 00:17:51.635 } 00:17:51.635 ], 00:17:51.635 "driver_specific": {} 00:17:51.635 } 00:17:51.635 ] 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.635 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.635 "name": "Existed_Raid", 00:17:51.635 "uuid": "5d875cf0-c61a-4fbf-b1f3-a421f82dba14", 00:17:51.635 "strip_size_kb": 64, 00:17:51.635 "state": "configuring", 00:17:51.635 "raid_level": "raid5f", 00:17:51.635 "superblock": true, 00:17:51.635 "num_base_bdevs": 4, 00:17:51.635 "num_base_bdevs_discovered": 3, 
00:17:51.635 "num_base_bdevs_operational": 4, 00:17:51.635 "base_bdevs_list": [ 00:17:51.635 { 00:17:51.635 "name": "BaseBdev1", 00:17:51.635 "uuid": "d37ce29c-fcfc-4855-b047-57acadde3be4", 00:17:51.635 "is_configured": true, 00:17:51.636 "data_offset": 2048, 00:17:51.636 "data_size": 63488 00:17:51.636 }, 00:17:51.636 { 00:17:51.636 "name": "BaseBdev2", 00:17:51.636 "uuid": "480c032f-1d6c-401c-825c-2a5a2845195c", 00:17:51.636 "is_configured": true, 00:17:51.636 "data_offset": 2048, 00:17:51.636 "data_size": 63488 00:17:51.636 }, 00:17:51.636 { 00:17:51.636 "name": "BaseBdev3", 00:17:51.636 "uuid": "fb7cb907-a662-4b81-aa33-8a09c88565ba", 00:17:51.636 "is_configured": true, 00:17:51.636 "data_offset": 2048, 00:17:51.636 "data_size": 63488 00:17:51.636 }, 00:17:51.636 { 00:17:51.636 "name": "BaseBdev4", 00:17:51.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.636 "is_configured": false, 00:17:51.636 "data_offset": 0, 00:17:51.636 "data_size": 0 00:17:51.636 } 00:17:51.636 ] 00:17:51.636 }' 00:17:51.636 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.636 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.896 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:51.896 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.896 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.157 [2024-10-11 09:51:36.541718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:52.157 [2024-10-11 09:51:36.542135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:52.157 [2024-10-11 09:51:36.542191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:52.157 [2024-10-11 
09:51:36.542485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:52.157 BaseBdev4 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.157 [2024-10-11 09:51:36.551303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:52.157 [2024-10-11 09:51:36.551366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:52.157 [2024-10-11 09:51:36.551668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:52.157 09:51:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.157 [ 00:17:52.157 { 00:17:52.157 "name": "BaseBdev4", 00:17:52.157 "aliases": [ 00:17:52.157 "74be8355-068b-4792-baae-1cb7eb1f8ab8" 00:17:52.157 ], 00:17:52.157 "product_name": "Malloc disk", 00:17:52.157 "block_size": 512, 00:17:52.157 "num_blocks": 65536, 00:17:52.157 "uuid": "74be8355-068b-4792-baae-1cb7eb1f8ab8", 00:17:52.157 "assigned_rate_limits": { 00:17:52.157 "rw_ios_per_sec": 0, 00:17:52.157 "rw_mbytes_per_sec": 0, 00:17:52.157 "r_mbytes_per_sec": 0, 00:17:52.157 "w_mbytes_per_sec": 0 00:17:52.157 }, 00:17:52.157 "claimed": true, 00:17:52.157 "claim_type": "exclusive_write", 00:17:52.157 "zoned": false, 00:17:52.157 "supported_io_types": { 00:17:52.157 "read": true, 00:17:52.157 "write": true, 00:17:52.157 "unmap": true, 00:17:52.157 "flush": true, 00:17:52.157 "reset": true, 00:17:52.157 "nvme_admin": false, 00:17:52.157 "nvme_io": false, 00:17:52.157 "nvme_io_md": false, 00:17:52.157 "write_zeroes": true, 00:17:52.157 "zcopy": true, 00:17:52.157 "get_zone_info": false, 00:17:52.157 "zone_management": false, 00:17:52.157 "zone_append": false, 00:17:52.157 "compare": false, 00:17:52.157 "compare_and_write": false, 00:17:52.157 "abort": true, 00:17:52.157 "seek_hole": false, 00:17:52.157 "seek_data": false, 00:17:52.157 "copy": true, 00:17:52.157 "nvme_iov_md": false 00:17:52.157 }, 00:17:52.157 "memory_domains": [ 00:17:52.157 { 00:17:52.157 "dma_device_id": "system", 00:17:52.157 "dma_device_type": 1 00:17:52.157 }, 00:17:52.157 { 00:17:52.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.157 "dma_device_type": 2 00:17:52.157 } 00:17:52.157 ], 00:17:52.157 "driver_specific": {} 00:17:52.157 } 00:17:52.157 ] 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.157 09:51:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
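The capacity figures in this run check out arithmetically: each Malloc base bdev exposes 65536 blocks of 512 B, the on-disk superblock requested with `-s` reserves a 2048-block `data_offset` (leaving `data_size` 63488), and raid5f dedicates one member's worth of strips per stripe to parity, so the `blockcnt 190464` logged at `raid_bdev_configure_cont` is (4 - 1) x 63488. All numbers below come from the logged dumps:

```python
# Capacity arithmetic for the raid5f volume assembled in this log.
base_num_blocks = 65536   # each Malloc base bdev: bdev_malloc_create 32 512 → 65536 blocks
data_offset = 2048        # blocks reserved for the superblock (-s flag)
num_base_bdevs = 4

data_size = base_num_blocks - data_offset          # 63488, as in the base_bdevs_list dumps
# raid5f stores one parity strip per stripe, so n-1 members carry data:
raid_num_blocks = (num_base_bdevs - 1) * data_size
print(data_size, raid_num_blocks)  # → 63488 190464
```

The same 190464 figure reappears as `num_blocks` in the Raid Volume dump once the array reaches the `online` state.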
00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.157 "name": "Existed_Raid", 00:17:52.157 "uuid": "5d875cf0-c61a-4fbf-b1f3-a421f82dba14", 00:17:52.157 "strip_size_kb": 64, 00:17:52.157 "state": "online", 00:17:52.157 "raid_level": "raid5f", 00:17:52.157 "superblock": true, 00:17:52.157 "num_base_bdevs": 4, 00:17:52.157 "num_base_bdevs_discovered": 4, 00:17:52.157 "num_base_bdevs_operational": 4, 00:17:52.157 "base_bdevs_list": [ 00:17:52.157 { 00:17:52.157 "name": "BaseBdev1", 00:17:52.157 "uuid": "d37ce29c-fcfc-4855-b047-57acadde3be4", 00:17:52.157 "is_configured": true, 00:17:52.157 "data_offset": 2048, 00:17:52.157 "data_size": 63488 00:17:52.157 }, 00:17:52.157 { 00:17:52.157 "name": "BaseBdev2", 00:17:52.157 "uuid": "480c032f-1d6c-401c-825c-2a5a2845195c", 00:17:52.157 "is_configured": true, 00:17:52.157 "data_offset": 2048, 00:17:52.157 "data_size": 63488 00:17:52.157 }, 00:17:52.157 { 00:17:52.157 "name": "BaseBdev3", 00:17:52.157 "uuid": "fb7cb907-a662-4b81-aa33-8a09c88565ba", 00:17:52.157 "is_configured": true, 00:17:52.157 "data_offset": 2048, 00:17:52.157 "data_size": 63488 00:17:52.157 }, 00:17:52.157 { 00:17:52.157 "name": "BaseBdev4", 00:17:52.157 "uuid": "74be8355-068b-4792-baae-1cb7eb1f8ab8", 00:17:52.157 "is_configured": true, 00:17:52.157 "data_offset": 2048, 00:17:52.157 "data_size": 63488 00:17:52.157 } 00:17:52.157 ] 00:17:52.157 }' 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.157 09:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.725 [2024-10-11 09:51:37.086808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.725 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.725 "name": "Existed_Raid", 00:17:52.725 "aliases": [ 00:17:52.725 "5d875cf0-c61a-4fbf-b1f3-a421f82dba14" 00:17:52.725 ], 00:17:52.725 "product_name": "Raid Volume", 00:17:52.725 "block_size": 512, 00:17:52.725 "num_blocks": 190464, 00:17:52.725 "uuid": "5d875cf0-c61a-4fbf-b1f3-a421f82dba14", 00:17:52.725 "assigned_rate_limits": { 00:17:52.725 "rw_ios_per_sec": 0, 00:17:52.725 "rw_mbytes_per_sec": 0, 00:17:52.725 "r_mbytes_per_sec": 0, 00:17:52.725 "w_mbytes_per_sec": 0 00:17:52.725 }, 00:17:52.725 "claimed": false, 00:17:52.725 "zoned": false, 00:17:52.725 "supported_io_types": { 00:17:52.725 "read": true, 00:17:52.725 "write": true, 00:17:52.725 "unmap": false, 00:17:52.725 "flush": false, 
00:17:52.725 "reset": true, 00:17:52.725 "nvme_admin": false, 00:17:52.725 "nvme_io": false, 00:17:52.725 "nvme_io_md": false, 00:17:52.725 "write_zeroes": true, 00:17:52.725 "zcopy": false, 00:17:52.725 "get_zone_info": false, 00:17:52.725 "zone_management": false, 00:17:52.725 "zone_append": false, 00:17:52.725 "compare": false, 00:17:52.725 "compare_and_write": false, 00:17:52.725 "abort": false, 00:17:52.725 "seek_hole": false, 00:17:52.725 "seek_data": false, 00:17:52.725 "copy": false, 00:17:52.725 "nvme_iov_md": false 00:17:52.725 }, 00:17:52.726 "driver_specific": { 00:17:52.726 "raid": { 00:17:52.726 "uuid": "5d875cf0-c61a-4fbf-b1f3-a421f82dba14", 00:17:52.726 "strip_size_kb": 64, 00:17:52.726 "state": "online", 00:17:52.726 "raid_level": "raid5f", 00:17:52.726 "superblock": true, 00:17:52.726 "num_base_bdevs": 4, 00:17:52.726 "num_base_bdevs_discovered": 4, 00:17:52.726 "num_base_bdevs_operational": 4, 00:17:52.726 "base_bdevs_list": [ 00:17:52.726 { 00:17:52.726 "name": "BaseBdev1", 00:17:52.726 "uuid": "d37ce29c-fcfc-4855-b047-57acadde3be4", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 2048, 00:17:52.726 "data_size": 63488 00:17:52.726 }, 00:17:52.726 { 00:17:52.726 "name": "BaseBdev2", 00:17:52.726 "uuid": "480c032f-1d6c-401c-825c-2a5a2845195c", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 2048, 00:17:52.726 "data_size": 63488 00:17:52.726 }, 00:17:52.726 { 00:17:52.726 "name": "BaseBdev3", 00:17:52.726 "uuid": "fb7cb907-a662-4b81-aa33-8a09c88565ba", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 2048, 00:17:52.726 "data_size": 63488 00:17:52.726 }, 00:17:52.726 { 00:17:52.726 "name": "BaseBdev4", 00:17:52.726 "uuid": "74be8355-068b-4792-baae-1cb7eb1f8ab8", 00:17:52.726 "is_configured": true, 00:17:52.726 "data_offset": 2048, 00:17:52.726 "data_size": 63488 00:17:52.726 } 00:17:52.726 ] 00:17:52.726 } 00:17:52.726 } 00:17:52.726 }' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:52.726 BaseBdev2 00:17:52.726 BaseBdev3 00:17:52.726 BaseBdev4' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.726 09:51:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:52.726 09:51:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.726 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.985 [2024-10-11 09:51:37.394041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.985 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.986 "name": "Existed_Raid", 00:17:52.986 "uuid": "5d875cf0-c61a-4fbf-b1f3-a421f82dba14", 00:17:52.986 "strip_size_kb": 64, 00:17:52.986 "state": "online", 00:17:52.986 "raid_level": "raid5f", 00:17:52.986 "superblock": true, 00:17:52.986 "num_base_bdevs": 4, 00:17:52.986 "num_base_bdevs_discovered": 3, 00:17:52.986 "num_base_bdevs_operational": 3, 00:17:52.986 "base_bdevs_list": [ 00:17:52.986 { 00:17:52.986 "name": 
null, 00:17:52.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.986 "is_configured": false, 00:17:52.986 "data_offset": 0, 00:17:52.986 "data_size": 63488 00:17:52.986 }, 00:17:52.986 { 00:17:52.986 "name": "BaseBdev2", 00:17:52.986 "uuid": "480c032f-1d6c-401c-825c-2a5a2845195c", 00:17:52.986 "is_configured": true, 00:17:52.986 "data_offset": 2048, 00:17:52.986 "data_size": 63488 00:17:52.986 }, 00:17:52.986 { 00:17:52.986 "name": "BaseBdev3", 00:17:52.986 "uuid": "fb7cb907-a662-4b81-aa33-8a09c88565ba", 00:17:52.986 "is_configured": true, 00:17:52.986 "data_offset": 2048, 00:17:52.986 "data_size": 63488 00:17:52.986 }, 00:17:52.986 { 00:17:52.986 "name": "BaseBdev4", 00:17:52.986 "uuid": "74be8355-068b-4792-baae-1cb7eb1f8ab8", 00:17:52.986 "is_configured": true, 00:17:52.986 "data_offset": 2048, 00:17:52.986 "data_size": 63488 00:17:52.986 } 00:17:52.986 ] 00:17:52.986 }' 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.986 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.555 09:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.555 [2024-10-11 09:51:37.951027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:53.555 [2024-10-11 09:51:37.951234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.555 [2024-10-11 09:51:38.040134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.555 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.555 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:53.555 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:53.555 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.555 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.556 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.556 [2024-10-11 09:51:38.104070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.815 [2024-10-11 
09:51:38.255483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:53.815 [2024-10-11 09:51:38.255540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:53.815 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.815 09:51:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.075 BaseBdev2 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.075 [ 00:17:54.075 { 00:17:54.075 "name": "BaseBdev2", 00:17:54.075 "aliases": [ 00:17:54.075 "c69d97af-172e-4595-b94c-e2afc63523a1" 00:17:54.075 ], 00:17:54.075 "product_name": "Malloc disk", 00:17:54.075 "block_size": 512, 00:17:54.075 
"num_blocks": 65536, 00:17:54.075 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:54.075 "assigned_rate_limits": { 00:17:54.075 "rw_ios_per_sec": 0, 00:17:54.075 "rw_mbytes_per_sec": 0, 00:17:54.075 "r_mbytes_per_sec": 0, 00:17:54.075 "w_mbytes_per_sec": 0 00:17:54.075 }, 00:17:54.075 "claimed": false, 00:17:54.075 "zoned": false, 00:17:54.075 "supported_io_types": { 00:17:54.075 "read": true, 00:17:54.075 "write": true, 00:17:54.075 "unmap": true, 00:17:54.075 "flush": true, 00:17:54.075 "reset": true, 00:17:54.075 "nvme_admin": false, 00:17:54.075 "nvme_io": false, 00:17:54.075 "nvme_io_md": false, 00:17:54.075 "write_zeroes": true, 00:17:54.075 "zcopy": true, 00:17:54.075 "get_zone_info": false, 00:17:54.075 "zone_management": false, 00:17:54.075 "zone_append": false, 00:17:54.075 "compare": false, 00:17:54.075 "compare_and_write": false, 00:17:54.075 "abort": true, 00:17:54.075 "seek_hole": false, 00:17:54.075 "seek_data": false, 00:17:54.075 "copy": true, 00:17:54.075 "nvme_iov_md": false 00:17:54.075 }, 00:17:54.075 "memory_domains": [ 00:17:54.075 { 00:17:54.075 "dma_device_id": "system", 00:17:54.075 "dma_device_type": 1 00:17:54.075 }, 00:17:54.075 { 00:17:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.075 "dma_device_type": 2 00:17:54.075 } 00:17:54.075 ], 00:17:54.075 "driver_specific": {} 00:17:54.075 } 00:17:54.075 ] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:54.075 09:51:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.075 BaseBdev3 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.075 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.075 [ 00:17:54.075 { 00:17:54.075 "name": "BaseBdev3", 00:17:54.075 "aliases": [ 00:17:54.075 
"3f3adb6f-f28c-4c78-b601-71b6e1e284dc" 00:17:54.075 ], 00:17:54.075 "product_name": "Malloc disk", 00:17:54.075 "block_size": 512, 00:17:54.075 "num_blocks": 65536, 00:17:54.075 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:54.075 "assigned_rate_limits": { 00:17:54.075 "rw_ios_per_sec": 0, 00:17:54.075 "rw_mbytes_per_sec": 0, 00:17:54.075 "r_mbytes_per_sec": 0, 00:17:54.075 "w_mbytes_per_sec": 0 00:17:54.075 }, 00:17:54.075 "claimed": false, 00:17:54.075 "zoned": false, 00:17:54.075 "supported_io_types": { 00:17:54.075 "read": true, 00:17:54.075 "write": true, 00:17:54.075 "unmap": true, 00:17:54.075 "flush": true, 00:17:54.075 "reset": true, 00:17:54.075 "nvme_admin": false, 00:17:54.075 "nvme_io": false, 00:17:54.075 "nvme_io_md": false, 00:17:54.075 "write_zeroes": true, 00:17:54.075 "zcopy": true, 00:17:54.075 "get_zone_info": false, 00:17:54.075 "zone_management": false, 00:17:54.075 "zone_append": false, 00:17:54.075 "compare": false, 00:17:54.075 "compare_and_write": false, 00:17:54.075 "abort": true, 00:17:54.075 "seek_hole": false, 00:17:54.075 "seek_data": false, 00:17:54.075 "copy": true, 00:17:54.075 "nvme_iov_md": false 00:17:54.075 }, 00:17:54.075 "memory_domains": [ 00:17:54.075 { 00:17:54.076 "dma_device_id": "system", 00:17:54.076 "dma_device_type": 1 00:17:54.076 }, 00:17:54.076 { 00:17:54.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.076 "dma_device_type": 2 00:17:54.076 } 00:17:54.076 ], 00:17:54.076 "driver_specific": {} 00:17:54.076 } 00:17:54.076 ] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:54.076 09:51:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.076 BaseBdev4 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:54.076 [ 00:17:54.076 { 00:17:54.076 "name": "BaseBdev4", 00:17:54.076 "aliases": [ 00:17:54.076 "1c31b763-a800-413a-8b39-0ebd83421727" 00:17:54.076 ], 00:17:54.076 "product_name": "Malloc disk", 00:17:54.076 "block_size": 512, 00:17:54.076 "num_blocks": 65536, 00:17:54.076 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:54.076 "assigned_rate_limits": { 00:17:54.076 "rw_ios_per_sec": 0, 00:17:54.076 "rw_mbytes_per_sec": 0, 00:17:54.076 "r_mbytes_per_sec": 0, 00:17:54.076 "w_mbytes_per_sec": 0 00:17:54.076 }, 00:17:54.076 "claimed": false, 00:17:54.076 "zoned": false, 00:17:54.076 "supported_io_types": { 00:17:54.076 "read": true, 00:17:54.076 "write": true, 00:17:54.076 "unmap": true, 00:17:54.076 "flush": true, 00:17:54.076 "reset": true, 00:17:54.076 "nvme_admin": false, 00:17:54.076 "nvme_io": false, 00:17:54.076 "nvme_io_md": false, 00:17:54.076 "write_zeroes": true, 00:17:54.076 "zcopy": true, 00:17:54.076 "get_zone_info": false, 00:17:54.076 "zone_management": false, 00:17:54.076 "zone_append": false, 00:17:54.076 "compare": false, 00:17:54.076 "compare_and_write": false, 00:17:54.076 "abort": true, 00:17:54.076 "seek_hole": false, 00:17:54.076 "seek_data": false, 00:17:54.076 "copy": true, 00:17:54.076 "nvme_iov_md": false 00:17:54.076 }, 00:17:54.076 "memory_domains": [ 00:17:54.076 { 00:17:54.076 "dma_device_id": "system", 00:17:54.076 "dma_device_type": 1 00:17:54.076 }, 00:17:54.076 { 00:17:54.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.076 "dma_device_type": 2 00:17:54.076 } 00:17:54.076 ], 00:17:54.076 "driver_specific": {} 00:17:54.076 } 00:17:54.076 ] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:54.076 09:51:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.076 [2024-10-11 09:51:38.657476] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.076 [2024-10-11 09:51:38.658045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.076 [2024-10-11 09:51:38.658125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.076 [2024-10-11 09:51:38.660105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:54.076 [2024-10-11 09:51:38.660210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.076 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.336 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.336 "name": "Existed_Raid", 00:17:54.336 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:54.336 "strip_size_kb": 64, 00:17:54.336 "state": "configuring", 00:17:54.336 "raid_level": "raid5f", 00:17:54.336 "superblock": true, 00:17:54.336 "num_base_bdevs": 4, 00:17:54.336 "num_base_bdevs_discovered": 3, 00:17:54.336 "num_base_bdevs_operational": 4, 00:17:54.336 "base_bdevs_list": [ 00:17:54.336 { 00:17:54.336 "name": "BaseBdev1", 00:17:54.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.336 "is_configured": false, 00:17:54.336 "data_offset": 0, 00:17:54.336 "data_size": 0 00:17:54.336 }, 00:17:54.336 { 00:17:54.336 "name": "BaseBdev2", 00:17:54.336 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:54.336 "is_configured": true, 00:17:54.336 "data_offset": 2048, 00:17:54.336 
"data_size": 63488 00:17:54.336 }, 00:17:54.336 { 00:17:54.336 "name": "BaseBdev3", 00:17:54.336 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:54.336 "is_configured": true, 00:17:54.336 "data_offset": 2048, 00:17:54.336 "data_size": 63488 00:17:54.336 }, 00:17:54.336 { 00:17:54.336 "name": "BaseBdev4", 00:17:54.336 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:54.336 "is_configured": true, 00:17:54.336 "data_offset": 2048, 00:17:54.336 "data_size": 63488 00:17:54.336 } 00:17:54.336 ] 00:17:54.336 }' 00:17:54.336 09:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.336 09:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.596 [2024-10-11 09:51:39.096826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.596 09:51:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.596 "name": "Existed_Raid", 00:17:54.596 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:54.596 "strip_size_kb": 64, 00:17:54.596 "state": "configuring", 00:17:54.596 "raid_level": "raid5f", 00:17:54.596 "superblock": true, 00:17:54.596 "num_base_bdevs": 4, 00:17:54.596 "num_base_bdevs_discovered": 2, 00:17:54.596 "num_base_bdevs_operational": 4, 00:17:54.596 "base_bdevs_list": [ 00:17:54.596 { 00:17:54.596 "name": "BaseBdev1", 00:17:54.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.596 "is_configured": false, 00:17:54.596 "data_offset": 0, 00:17:54.596 "data_size": 0 00:17:54.596 }, 00:17:54.596 { 00:17:54.596 "name": null, 00:17:54.596 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:54.596 
"is_configured": false, 00:17:54.596 "data_offset": 0, 00:17:54.596 "data_size": 63488 00:17:54.596 }, 00:17:54.596 { 00:17:54.596 "name": "BaseBdev3", 00:17:54.596 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:54.596 "is_configured": true, 00:17:54.596 "data_offset": 2048, 00:17:54.596 "data_size": 63488 00:17:54.596 }, 00:17:54.596 { 00:17:54.596 "name": "BaseBdev4", 00:17:54.596 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:54.596 "is_configured": true, 00:17:54.596 "data_offset": 2048, 00:17:54.596 "data_size": 63488 00:17:54.596 } 00:17:54.596 ] 00:17:54.596 }' 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.596 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.179 [2024-10-11 09:51:39.561828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:55.179 BaseBdev1 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.179 [ 00:17:55.179 { 00:17:55.179 "name": "BaseBdev1", 00:17:55.179 "aliases": [ 00:17:55.179 "26f18aa9-c1e2-463e-a783-8e7273daf4ed" 00:17:55.179 ], 00:17:55.179 "product_name": "Malloc disk", 00:17:55.179 "block_size": 512, 00:17:55.179 "num_blocks": 65536, 00:17:55.179 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 
00:17:55.179 "assigned_rate_limits": { 00:17:55.179 "rw_ios_per_sec": 0, 00:17:55.179 "rw_mbytes_per_sec": 0, 00:17:55.179 "r_mbytes_per_sec": 0, 00:17:55.179 "w_mbytes_per_sec": 0 00:17:55.179 }, 00:17:55.179 "claimed": true, 00:17:55.179 "claim_type": "exclusive_write", 00:17:55.179 "zoned": false, 00:17:55.179 "supported_io_types": { 00:17:55.179 "read": true, 00:17:55.179 "write": true, 00:17:55.179 "unmap": true, 00:17:55.179 "flush": true, 00:17:55.179 "reset": true, 00:17:55.179 "nvme_admin": false, 00:17:55.179 "nvme_io": false, 00:17:55.179 "nvme_io_md": false, 00:17:55.179 "write_zeroes": true, 00:17:55.179 "zcopy": true, 00:17:55.179 "get_zone_info": false, 00:17:55.179 "zone_management": false, 00:17:55.179 "zone_append": false, 00:17:55.179 "compare": false, 00:17:55.179 "compare_and_write": false, 00:17:55.179 "abort": true, 00:17:55.179 "seek_hole": false, 00:17:55.179 "seek_data": false, 00:17:55.179 "copy": true, 00:17:55.179 "nvme_iov_md": false 00:17:55.179 }, 00:17:55.179 "memory_domains": [ 00:17:55.179 { 00:17:55.179 "dma_device_id": "system", 00:17:55.179 "dma_device_type": 1 00:17:55.179 }, 00:17:55.179 { 00:17:55.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.179 "dma_device_type": 2 00:17:55.179 } 00:17:55.179 ], 00:17:55.179 "driver_specific": {} 00:17:55.179 } 00:17:55.179 ] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.179 09:51:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.179 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.179 "name": "Existed_Raid", 00:17:55.179 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:55.179 "strip_size_kb": 64, 00:17:55.179 "state": "configuring", 00:17:55.179 "raid_level": "raid5f", 00:17:55.179 "superblock": true, 00:17:55.179 "num_base_bdevs": 4, 00:17:55.180 "num_base_bdevs_discovered": 3, 00:17:55.180 "num_base_bdevs_operational": 4, 00:17:55.180 "base_bdevs_list": [ 00:17:55.180 { 00:17:55.180 "name": "BaseBdev1", 00:17:55.180 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 
00:17:55.180 "is_configured": true, 00:17:55.180 "data_offset": 2048, 00:17:55.180 "data_size": 63488 00:17:55.180 }, 00:17:55.180 { 00:17:55.180 "name": null, 00:17:55.180 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:55.180 "is_configured": false, 00:17:55.180 "data_offset": 0, 00:17:55.180 "data_size": 63488 00:17:55.180 }, 00:17:55.180 { 00:17:55.180 "name": "BaseBdev3", 00:17:55.180 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:55.180 "is_configured": true, 00:17:55.180 "data_offset": 2048, 00:17:55.180 "data_size": 63488 00:17:55.180 }, 00:17:55.180 { 00:17:55.180 "name": "BaseBdev4", 00:17:55.180 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:55.180 "is_configured": true, 00:17:55.180 "data_offset": 2048, 00:17:55.180 "data_size": 63488 00:17:55.180 } 00:17:55.180 ] 00:17:55.180 }' 00:17:55.180 09:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.180 09:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.455 [2024-10-11 09:51:40.061007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.455 09:51:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.715 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.715 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.715 "name": "Existed_Raid", 00:17:55.715 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:55.715 "strip_size_kb": 64, 00:17:55.715 "state": "configuring", 00:17:55.715 "raid_level": "raid5f", 00:17:55.715 "superblock": true, 00:17:55.715 "num_base_bdevs": 4, 00:17:55.715 "num_base_bdevs_discovered": 2, 00:17:55.715 "num_base_bdevs_operational": 4, 00:17:55.715 "base_bdevs_list": [ 00:17:55.715 { 00:17:55.715 "name": "BaseBdev1", 00:17:55.715 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 00:17:55.715 "is_configured": true, 00:17:55.715 "data_offset": 2048, 00:17:55.715 "data_size": 63488 00:17:55.715 }, 00:17:55.715 { 00:17:55.715 "name": null, 00:17:55.715 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:55.715 "is_configured": false, 00:17:55.715 "data_offset": 0, 00:17:55.715 "data_size": 63488 00:17:55.715 }, 00:17:55.715 { 00:17:55.715 "name": null, 00:17:55.715 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:55.715 "is_configured": false, 00:17:55.715 "data_offset": 0, 00:17:55.715 "data_size": 63488 00:17:55.715 }, 00:17:55.715 { 00:17:55.715 "name": "BaseBdev4", 00:17:55.715 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:55.715 "is_configured": true, 00:17:55.715 "data_offset": 2048, 00:17:55.715 "data_size": 63488 00:17:55.715 } 00:17:55.715 ] 00:17:55.715 }' 00:17:55.715 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.715 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.975 [2024-10-11 09:51:40.540204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.975 "name": "Existed_Raid", 00:17:55.975 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:55.975 "strip_size_kb": 64, 00:17:55.975 "state": "configuring", 00:17:55.975 "raid_level": "raid5f", 00:17:55.975 "superblock": true, 00:17:55.975 "num_base_bdevs": 4, 00:17:55.975 "num_base_bdevs_discovered": 3, 00:17:55.975 "num_base_bdevs_operational": 4, 00:17:55.975 "base_bdevs_list": [ 00:17:55.975 { 00:17:55.975 "name": "BaseBdev1", 00:17:55.975 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 00:17:55.975 "is_configured": true, 00:17:55.975 "data_offset": 2048, 00:17:55.975 "data_size": 63488 00:17:55.975 }, 00:17:55.975 { 00:17:55.975 "name": null, 00:17:55.975 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:55.975 "is_configured": false, 00:17:55.975 "data_offset": 0, 00:17:55.975 "data_size": 63488 00:17:55.975 }, 00:17:55.975 { 00:17:55.975 "name": "BaseBdev3", 00:17:55.975 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 
00:17:55.975 "is_configured": true, 00:17:55.975 "data_offset": 2048, 00:17:55.975 "data_size": 63488 00:17:55.975 }, 00:17:55.975 { 00:17:55.975 "name": "BaseBdev4", 00:17:55.975 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:55.975 "is_configured": true, 00:17:55.975 "data_offset": 2048, 00:17:55.975 "data_size": 63488 00:17:55.975 } 00:17:55.975 ] 00:17:55.975 }' 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.975 09:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.543 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.543 [2024-10-11 09:51:41.095302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.802 "name": "Existed_Raid", 00:17:56.802 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:56.802 "strip_size_kb": 64, 00:17:56.802 "state": "configuring", 00:17:56.802 "raid_level": "raid5f", 
00:17:56.802 "superblock": true, 00:17:56.802 "num_base_bdevs": 4, 00:17:56.802 "num_base_bdevs_discovered": 2, 00:17:56.802 "num_base_bdevs_operational": 4, 00:17:56.802 "base_bdevs_list": [ 00:17:56.802 { 00:17:56.802 "name": null, 00:17:56.802 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 00:17:56.802 "is_configured": false, 00:17:56.802 "data_offset": 0, 00:17:56.802 "data_size": 63488 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "name": null, 00:17:56.802 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:56.802 "is_configured": false, 00:17:56.802 "data_offset": 0, 00:17:56.802 "data_size": 63488 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "name": "BaseBdev3", 00:17:56.802 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:56.802 "is_configured": true, 00:17:56.802 "data_offset": 2048, 00:17:56.802 "data_size": 63488 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "name": "BaseBdev4", 00:17:56.802 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:56.802 "is_configured": true, 00:17:56.802 "data_offset": 2048, 00:17:56.802 "data_size": 63488 00:17:56.802 } 00:17:56.802 ] 00:17:56.802 }' 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.802 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.062 [2024-10-11 09:51:41.669960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.062 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.063 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.063 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.063 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.063 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:57.063 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.063 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.063 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.322 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.322 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.322 "name": "Existed_Raid", 00:17:57.322 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:57.322 "strip_size_kb": 64, 00:17:57.322 "state": "configuring", 00:17:57.322 "raid_level": "raid5f", 00:17:57.322 "superblock": true, 00:17:57.322 "num_base_bdevs": 4, 00:17:57.322 "num_base_bdevs_discovered": 3, 00:17:57.322 "num_base_bdevs_operational": 4, 00:17:57.322 "base_bdevs_list": [ 00:17:57.322 { 00:17:57.322 "name": null, 00:17:57.322 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 00:17:57.322 "is_configured": false, 00:17:57.322 "data_offset": 0, 00:17:57.322 "data_size": 63488 00:17:57.322 }, 00:17:57.322 { 00:17:57.322 "name": "BaseBdev2", 00:17:57.322 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:57.322 "is_configured": true, 00:17:57.322 "data_offset": 2048, 00:17:57.322 "data_size": 63488 00:17:57.322 }, 00:17:57.322 { 00:17:57.322 "name": "BaseBdev3", 00:17:57.322 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:57.322 "is_configured": true, 00:17:57.322 "data_offset": 2048, 00:17:57.322 "data_size": 63488 00:17:57.322 }, 00:17:57.322 { 00:17:57.322 "name": "BaseBdev4", 00:17:57.322 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:57.322 "is_configured": true, 00:17:57.322 "data_offset": 2048, 00:17:57.322 "data_size": 63488 00:17:57.322 } 00:17:57.322 ] 00:17:57.322 }' 00:17:57.322 09:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:17:57.322 09:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 26f18aa9-c1e2-463e-a783-8e7273daf4ed 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.582 [2024-10-11 09:51:42.168062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:57.582 [2024-10-11 09:51:42.168463] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:57.582 [2024-10-11 09:51:42.168522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:57.582 [2024-10-11 09:51:42.168857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:57.582 NewBaseBdev 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.582 [2024-10-11 09:51:42.176841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:57.582 [2024-10-11 09:51:42.176908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:57.582 [2024-10-11 09:51:42.177225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.582 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.582 [ 00:17:57.582 { 00:17:57.582 "name": "NewBaseBdev", 00:17:57.582 "aliases": [ 00:17:57.582 "26f18aa9-c1e2-463e-a783-8e7273daf4ed" 00:17:57.582 ], 00:17:57.582 "product_name": "Malloc disk", 00:17:57.582 "block_size": 512, 00:17:57.582 "num_blocks": 65536, 00:17:57.582 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 00:17:57.582 "assigned_rate_limits": { 00:17:57.582 "rw_ios_per_sec": 0, 00:17:57.582 "rw_mbytes_per_sec": 0, 00:17:57.582 "r_mbytes_per_sec": 0, 00:17:57.582 "w_mbytes_per_sec": 0 00:17:57.582 }, 00:17:57.582 "claimed": true, 00:17:57.582 "claim_type": "exclusive_write", 00:17:57.582 "zoned": false, 00:17:57.582 "supported_io_types": { 00:17:57.582 "read": true, 00:17:57.582 "write": true, 00:17:57.582 "unmap": true, 00:17:57.582 "flush": true, 00:17:57.582 "reset": true, 00:17:57.582 "nvme_admin": false, 00:17:57.582 "nvme_io": false, 00:17:57.582 "nvme_io_md": false, 00:17:57.582 "write_zeroes": true, 00:17:57.582 "zcopy": true, 00:17:57.582 "get_zone_info": false, 00:17:57.582 "zone_management": false, 00:17:57.582 "zone_append": false, 00:17:57.582 "compare": false, 00:17:57.582 "compare_and_write": false, 00:17:57.582 "abort": true, 00:17:57.582 "seek_hole": false, 00:17:57.582 "seek_data": false, 00:17:57.582 "copy": true, 00:17:57.582 "nvme_iov_md": false 00:17:57.582 }, 00:17:57.582 "memory_domains": [ 00:17:57.582 { 00:17:57.582 "dma_device_id": "system", 00:17:57.582 "dma_device_type": 1 00:17:57.583 }, 00:17:57.583 { 00:17:57.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.583 "dma_device_type": 2 00:17:57.583 } 
00:17:57.583 ], 00:17:57.583 "driver_specific": {} 00:17:57.583 } 00:17:57.583 ] 00:17:57.583 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.583 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:57.583 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:57.583 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.583 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.583 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.583 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.842 
09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.842 "name": "Existed_Raid", 00:17:57.842 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:57.842 "strip_size_kb": 64, 00:17:57.842 "state": "online", 00:17:57.842 "raid_level": "raid5f", 00:17:57.842 "superblock": true, 00:17:57.842 "num_base_bdevs": 4, 00:17:57.842 "num_base_bdevs_discovered": 4, 00:17:57.842 "num_base_bdevs_operational": 4, 00:17:57.842 "base_bdevs_list": [ 00:17:57.842 { 00:17:57.842 "name": "NewBaseBdev", 00:17:57.842 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 00:17:57.842 "is_configured": true, 00:17:57.842 "data_offset": 2048, 00:17:57.842 "data_size": 63488 00:17:57.842 }, 00:17:57.842 { 00:17:57.842 "name": "BaseBdev2", 00:17:57.842 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:57.842 "is_configured": true, 00:17:57.842 "data_offset": 2048, 00:17:57.842 "data_size": 63488 00:17:57.842 }, 00:17:57.842 { 00:17:57.842 "name": "BaseBdev3", 00:17:57.842 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:57.842 "is_configured": true, 00:17:57.842 "data_offset": 2048, 00:17:57.842 "data_size": 63488 00:17:57.842 }, 00:17:57.842 { 00:17:57.842 "name": "BaseBdev4", 00:17:57.842 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:57.842 "is_configured": true, 00:17:57.842 "data_offset": 2048, 00:17:57.842 "data_size": 63488 00:17:57.842 } 00:17:57.842 ] 00:17:57.842 }' 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.842 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.103 [2024-10-11 09:51:42.679941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.103 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.103 "name": "Existed_Raid", 00:17:58.103 "aliases": [ 00:17:58.103 "2f058b67-ab6e-45eb-90bc-c63e703dfde0" 00:17:58.103 ], 00:17:58.103 "product_name": "Raid Volume", 00:17:58.103 "block_size": 512, 00:17:58.103 "num_blocks": 190464, 00:17:58.103 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:58.103 "assigned_rate_limits": { 00:17:58.103 "rw_ios_per_sec": 0, 00:17:58.103 "rw_mbytes_per_sec": 0, 00:17:58.103 "r_mbytes_per_sec": 0, 00:17:58.103 "w_mbytes_per_sec": 0 00:17:58.103 }, 00:17:58.103 "claimed": false, 00:17:58.103 "zoned": false, 00:17:58.103 "supported_io_types": { 00:17:58.103 "read": true, 00:17:58.103 "write": true, 00:17:58.103 "unmap": false, 00:17:58.103 "flush": false, 
00:17:58.103 "reset": true, 00:17:58.103 "nvme_admin": false, 00:17:58.103 "nvme_io": false, 00:17:58.103 "nvme_io_md": false, 00:17:58.103 "write_zeroes": true, 00:17:58.103 "zcopy": false, 00:17:58.103 "get_zone_info": false, 00:17:58.103 "zone_management": false, 00:17:58.103 "zone_append": false, 00:17:58.103 "compare": false, 00:17:58.103 "compare_and_write": false, 00:17:58.103 "abort": false, 00:17:58.103 "seek_hole": false, 00:17:58.103 "seek_data": false, 00:17:58.103 "copy": false, 00:17:58.103 "nvme_iov_md": false 00:17:58.103 }, 00:17:58.103 "driver_specific": { 00:17:58.103 "raid": { 00:17:58.103 "uuid": "2f058b67-ab6e-45eb-90bc-c63e703dfde0", 00:17:58.103 "strip_size_kb": 64, 00:17:58.103 "state": "online", 00:17:58.103 "raid_level": "raid5f", 00:17:58.103 "superblock": true, 00:17:58.103 "num_base_bdevs": 4, 00:17:58.103 "num_base_bdevs_discovered": 4, 00:17:58.103 "num_base_bdevs_operational": 4, 00:17:58.103 "base_bdevs_list": [ 00:17:58.103 { 00:17:58.103 "name": "NewBaseBdev", 00:17:58.103 "uuid": "26f18aa9-c1e2-463e-a783-8e7273daf4ed", 00:17:58.103 "is_configured": true, 00:17:58.103 "data_offset": 2048, 00:17:58.103 "data_size": 63488 00:17:58.103 }, 00:17:58.103 { 00:17:58.103 "name": "BaseBdev2", 00:17:58.103 "uuid": "c69d97af-172e-4595-b94c-e2afc63523a1", 00:17:58.103 "is_configured": true, 00:17:58.103 "data_offset": 2048, 00:17:58.103 "data_size": 63488 00:17:58.103 }, 00:17:58.103 { 00:17:58.103 "name": "BaseBdev3", 00:17:58.103 "uuid": "3f3adb6f-f28c-4c78-b601-71b6e1e284dc", 00:17:58.103 "is_configured": true, 00:17:58.103 "data_offset": 2048, 00:17:58.103 "data_size": 63488 00:17:58.103 }, 00:17:58.103 { 00:17:58.103 "name": "BaseBdev4", 00:17:58.103 "uuid": "1c31b763-a800-413a-8b39-0ebd83421727", 00:17:58.103 "is_configured": true, 00:17:58.103 "data_offset": 2048, 00:17:58.103 "data_size": 63488 00:17:58.103 } 00:17:58.103 ] 00:17:58.103 } 00:17:58.103 } 00:17:58.103 }' 00:17:58.104 09:51:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:58.363 BaseBdev2 00:17:58.363 BaseBdev3 00:17:58.363 BaseBdev4' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:58.363 09:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.623 09:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.623 [2024-10-11 09:51:43.035055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:58.623 [2024-10-11 09:51:43.035085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.623 [2024-10-11 09:51:43.035165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.623 [2024-10-11 09:51:43.035464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.623 [2024-10-11 09:51:43.035477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84021 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84021 ']' 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
84021 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84021 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.623 killing process with pid 84021 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84021' 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84021 00:17:58.623 [2024-10-11 09:51:43.084205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.623 09:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84021 00:17:58.882 [2024-10-11 09:51:43.462663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.259 09:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:00.259 00:18:00.259 real 0m11.572s 00:18:00.259 user 0m18.436s 00:18:00.259 sys 0m2.109s 00:18:00.259 09:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.259 ************************************ 00:18:00.259 END TEST raid5f_state_function_test_sb 00:18:00.259 ************************************ 00:18:00.259 09:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.259 09:51:44 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:00.259 09:51:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 
-le 1 ']' 00:18:00.259 09:51:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.259 09:51:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.259 ************************************ 00:18:00.259 START TEST raid5f_superblock_test 00:18:00.259 ************************************ 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:00.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84692 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84692 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84692 ']' 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.259 09:51:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:00.259 [2024-10-11 09:51:44.722687] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:18:00.259 [2024-10-11 09:51:44.723355] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84692 ] 00:18:00.259 [2024-10-11 09:51:44.888404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.518 [2024-10-11 09:51:45.026411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.776 [2024-10-11 09:51:45.253296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.776 [2024-10-11 09:51:45.253344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.035 malloc1 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.035 [2024-10-11 09:51:45.618993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.035 [2024-10-11 09:51:45.619268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.035 [2024-10-11 09:51:45.619383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:01.035 [2024-10-11 09:51:45.619469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.035 [2024-10-11 09:51:45.621825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.035 [2024-10-11 09:51:45.621987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.035 pt1 00:18:01.035 
09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.035 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.297 malloc2 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.297 [2024-10-11 09:51:45.682500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.297 [2024-10-11 
09:51:45.682808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.297 [2024-10-11 09:51:45.682925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:01.297 [2024-10-11 09:51:45.683017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.297 [2024-10-11 09:51:45.685432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.297 [2024-10-11 09:51:45.685577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.297 pt2 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:01.297 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.298 malloc3 00:18:01.298 09:51:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.298 [2024-10-11 09:51:45.755248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:01.298 [2024-10-11 09:51:45.755675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.298 [2024-10-11 09:51:45.755794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:01.298 [2024-10-11 09:51:45.755837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.298 [2024-10-11 09:51:45.758262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.298 [2024-10-11 09:51:45.758386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.298 pt3 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 
00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.298 malloc4 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.298 [2024-10-11 09:51:45.818520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:01.298 [2024-10-11 09:51:45.818760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.298 [2024-10-11 09:51:45.818832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:01.298 [2024-10-11 09:51:45.818927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.298 [2024-10-11 09:51:45.821225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.298 [2024-10-11 09:51:45.821371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:01.298 pt4 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.298 [2024-10-11 09:51:45.830545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.298 [2024-10-11 09:51:45.832430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.298 [2024-10-11 09:51:45.832538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.298 [2024-10-11 09:51:45.832607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:01.298 [2024-10-11 09:51:45.832851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:01.298 [2024-10-11 09:51:45.832866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:01.298 [2024-10-11 09:51:45.833123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:01.298 [2024-10-11 09:51:45.841088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:01.298 [2024-10-11 09:51:45.841142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:01.298 [2024-10-11 09:51:45.841377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 4 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.298 "name": "raid_bdev1", 00:18:01.298 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:01.298 "strip_size_kb": 64, 00:18:01.298 "state": "online", 00:18:01.298 "raid_level": "raid5f", 00:18:01.298 "superblock": true, 00:18:01.298 "num_base_bdevs": 4, 00:18:01.298 "num_base_bdevs_discovered": 4, 
00:18:01.298 "num_base_bdevs_operational": 4, 00:18:01.298 "base_bdevs_list": [ 00:18:01.298 { 00:18:01.298 "name": "pt1", 00:18:01.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.298 "is_configured": true, 00:18:01.298 "data_offset": 2048, 00:18:01.298 "data_size": 63488 00:18:01.298 }, 00:18:01.298 { 00:18:01.298 "name": "pt2", 00:18:01.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.298 "is_configured": true, 00:18:01.298 "data_offset": 2048, 00:18:01.298 "data_size": 63488 00:18:01.298 }, 00:18:01.298 { 00:18:01.298 "name": "pt3", 00:18:01.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.298 "is_configured": true, 00:18:01.298 "data_offset": 2048, 00:18:01.298 "data_size": 63488 00:18:01.298 }, 00:18:01.298 { 00:18:01.298 "name": "pt4", 00:18:01.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.298 "is_configured": true, 00:18:01.298 "data_offset": 2048, 00:18:01.298 "data_size": 63488 00:18:01.298 } 00:18:01.298 ] 00:18:01.298 }' 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.298 09:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.866 [2024-10-11 09:51:46.324543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.866 "name": "raid_bdev1", 00:18:01.866 "aliases": [ 00:18:01.866 "64d96e15-1f26-43c1-9588-5a1957f0d5e7" 00:18:01.866 ], 00:18:01.866 "product_name": "Raid Volume", 00:18:01.866 "block_size": 512, 00:18:01.866 "num_blocks": 190464, 00:18:01.866 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:01.866 "assigned_rate_limits": { 00:18:01.866 "rw_ios_per_sec": 0, 00:18:01.866 "rw_mbytes_per_sec": 0, 00:18:01.866 "r_mbytes_per_sec": 0, 00:18:01.866 "w_mbytes_per_sec": 0 00:18:01.866 }, 00:18:01.866 "claimed": false, 00:18:01.866 "zoned": false, 00:18:01.866 "supported_io_types": { 00:18:01.866 "read": true, 00:18:01.866 "write": true, 00:18:01.866 "unmap": false, 00:18:01.866 "flush": false, 00:18:01.866 "reset": true, 00:18:01.866 "nvme_admin": false, 00:18:01.866 "nvme_io": false, 00:18:01.866 "nvme_io_md": false, 00:18:01.866 "write_zeroes": true, 00:18:01.866 "zcopy": false, 00:18:01.866 "get_zone_info": false, 00:18:01.866 "zone_management": false, 00:18:01.866 "zone_append": false, 00:18:01.866 "compare": false, 00:18:01.866 "compare_and_write": false, 00:18:01.866 "abort": false, 00:18:01.866 "seek_hole": false, 00:18:01.866 "seek_data": false, 00:18:01.866 "copy": false, 00:18:01.866 "nvme_iov_md": false 00:18:01.866 }, 00:18:01.866 "driver_specific": { 00:18:01.866 "raid": { 00:18:01.866 "uuid": 
"64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:01.866 "strip_size_kb": 64, 00:18:01.866 "state": "online", 00:18:01.866 "raid_level": "raid5f", 00:18:01.866 "superblock": true, 00:18:01.866 "num_base_bdevs": 4, 00:18:01.866 "num_base_bdevs_discovered": 4, 00:18:01.866 "num_base_bdevs_operational": 4, 00:18:01.866 "base_bdevs_list": [ 00:18:01.866 { 00:18:01.866 "name": "pt1", 00:18:01.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.866 "is_configured": true, 00:18:01.866 "data_offset": 2048, 00:18:01.866 "data_size": 63488 00:18:01.866 }, 00:18:01.866 { 00:18:01.866 "name": "pt2", 00:18:01.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.866 "is_configured": true, 00:18:01.866 "data_offset": 2048, 00:18:01.866 "data_size": 63488 00:18:01.866 }, 00:18:01.866 { 00:18:01.866 "name": "pt3", 00:18:01.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.866 "is_configured": true, 00:18:01.866 "data_offset": 2048, 00:18:01.866 "data_size": 63488 00:18:01.866 }, 00:18:01.866 { 00:18:01.866 "name": "pt4", 00:18:01.866 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.866 "is_configured": true, 00:18:01.866 "data_offset": 2048, 00:18:01.866 "data_size": 63488 00:18:01.866 } 00:18:01.866 ] 00:18:01.866 } 00:18:01.866 } 00:18:01.866 }' 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:01.866 pt2 00:18:01.866 pt3 00:18:01.866 pt4' 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.866 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:02.126 09:51:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.126 [2024-10-11 09:51:46.656070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64d96e15-1f26-43c1-9588-5a1957f0d5e7 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64d96e15-1f26-43c1-9588-5a1957f0d5e7 ']' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 [2024-10-11 09:51:46.679797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.126 [2024-10-11 09:51:46.679823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.126 [2024-10-11 09:51:46.679899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.126 [2024-10-11 09:51:46.679982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.126 [2024-10-11 09:51:46.679996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 09:51:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.126 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:02.388 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.389 09:51:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.389 [2024-10-11 09:51:46.823556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:02.389 [2024-10-11 09:51:46.825640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:02.389 [2024-10-11 09:51:46.825751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:02.389 [2024-10-11 09:51:46.825823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:02.389 [2024-10-11 09:51:46.825908] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:02.389 [2024-10-11 09:51:46.826366] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:02.389 [2024-10-11 09:51:46.826495] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:02.389 [2024-10-11 09:51:46.826631] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:02.389 [2024-10-11 09:51:46.826730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.389 [2024-10-11 09:51:46.826757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:18:02.389 request: 00:18:02.389 { 00:18:02.389 "name": "raid_bdev1", 00:18:02.389 "raid_level": "raid5f", 00:18:02.389 "base_bdevs": [ 00:18:02.389 "malloc1", 00:18:02.389 "malloc2", 00:18:02.389 "malloc3", 00:18:02.389 "malloc4" 00:18:02.389 ], 00:18:02.389 "strip_size_kb": 64, 00:18:02.389 "superblock": false, 00:18:02.389 "method": "bdev_raid_create", 00:18:02.389 "req_id": 1 00:18:02.389 } 00:18:02.389 Got JSON-RPC error response 00:18:02.389 response: 00:18:02.389 { 00:18:02.389 "code": -17, 00:18:02.389 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:02.389 } 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.389 [2024-10-11 09:51:46.883439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.389 [2024-10-11 09:51:46.883695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.389 [2024-10-11 09:51:46.883842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:02.389 [2024-10-11 09:51:46.883943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.389 [2024-10-11 09:51:46.886512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.389 [2024-10-11 09:51:46.886669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.389 [2024-10-11 09:51:46.886857] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:02.389 [2024-10-11 09:51:46.886974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:02.389 pt1 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.389 09:51:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.389 "name": "raid_bdev1", 00:18:02.389 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:02.389 "strip_size_kb": 64, 00:18:02.389 "state": "configuring", 00:18:02.389 "raid_level": "raid5f", 00:18:02.389 "superblock": true, 00:18:02.389 "num_base_bdevs": 4, 00:18:02.389 "num_base_bdevs_discovered": 1, 00:18:02.389 "num_base_bdevs_operational": 4, 00:18:02.389 "base_bdevs_list": [ 00:18:02.389 { 00:18:02.389 "name": "pt1", 00:18:02.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.389 "is_configured": true, 00:18:02.389 "data_offset": 2048, 00:18:02.389 "data_size": 63488 00:18:02.389 }, 00:18:02.389 { 00:18:02.389 "name": null, 00:18:02.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.389 "is_configured": false, 00:18:02.389 "data_offset": 2048, 00:18:02.389 
"data_size": 63488 00:18:02.389 }, 00:18:02.389 { 00:18:02.389 "name": null, 00:18:02.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.389 "is_configured": false, 00:18:02.389 "data_offset": 2048, 00:18:02.389 "data_size": 63488 00:18:02.389 }, 00:18:02.389 { 00:18:02.389 "name": null, 00:18:02.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.389 "is_configured": false, 00:18:02.389 "data_offset": 2048, 00:18:02.389 "data_size": 63488 00:18:02.389 } 00:18:02.389 ] 00:18:02.389 }' 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.389 09:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.958 [2024-10-11 09:51:47.358638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.958 [2024-10-11 09:51:47.358940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.958 [2024-10-11 09:51:47.359013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:02.958 [2024-10-11 09:51:47.359076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.958 [2024-10-11 09:51:47.359701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.958 [2024-10-11 09:51:47.359831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.958 [2024-10-11 09:51:47.359977] bdev_raid.c:3901:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:18:02.958 [2024-10-11 09:51:47.360011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.958 pt2 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.958 [2024-10-11 09:51:47.370612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.958 09:51:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.958 "name": "raid_bdev1", 00:18:02.958 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:02.958 "strip_size_kb": 64, 00:18:02.958 "state": "configuring", 00:18:02.958 "raid_level": "raid5f", 00:18:02.958 "superblock": true, 00:18:02.958 "num_base_bdevs": 4, 00:18:02.958 "num_base_bdevs_discovered": 1, 00:18:02.958 "num_base_bdevs_operational": 4, 00:18:02.958 "base_bdevs_list": [ 00:18:02.958 { 00:18:02.958 "name": "pt1", 00:18:02.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.958 "is_configured": true, 00:18:02.958 "data_offset": 2048, 00:18:02.958 "data_size": 63488 00:18:02.958 }, 00:18:02.958 { 00:18:02.958 "name": null, 00:18:02.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.958 "is_configured": false, 00:18:02.958 "data_offset": 0, 00:18:02.958 "data_size": 63488 00:18:02.958 }, 00:18:02.958 { 00:18:02.958 "name": null, 00:18:02.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.958 "is_configured": false, 00:18:02.958 "data_offset": 2048, 00:18:02.958 "data_size": 63488 00:18:02.958 }, 00:18:02.958 { 00:18:02.958 "name": null, 00:18:02.958 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.958 "is_configured": false, 00:18:02.958 "data_offset": 2048, 00:18:02.958 "data_size": 63488 00:18:02.958 } 00:18:02.958 ] 00:18:02.958 }' 00:18:02.958 09:51:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.958 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:03.218 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.218 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.218 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.218 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 [2024-10-11 09:51:47.773958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.218 [2024-10-11 09:51:47.774247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.218 [2024-10-11 09:51:47.774302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:03.218 [2024-10-11 09:51:47.774453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.218 [2024-10-11 09:51:47.775040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.218 [2024-10-11 09:51:47.775227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.218 [2024-10-11 09:51:47.775415] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.218 [2024-10-11 09:51:47.775480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.218 pt2 00:18:03.218 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.218 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.219 09:51:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.219 [2024-10-11 09:51:47.789909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:03.219 [2024-10-11 09:51:47.790113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.219 [2024-10-11 09:51:47.790231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:03.219 [2024-10-11 09:51:47.790318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.219 [2024-10-11 09:51:47.790843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.219 [2024-10-11 09:51:47.790966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:03.219 [2024-10-11 09:51:47.791129] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:03.219 [2024-10-11 09:51:47.791188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.219 pt3 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.219 [2024-10-11 09:51:47.801851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:03.219 [2024-10-11 09:51:47.802001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.219 [2024-10-11 09:51:47.802067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:03.219 [2024-10-11 09:51:47.802145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.219 [2024-10-11 09:51:47.802577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.219 [2024-10-11 09:51:47.802679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:03.219 [2024-10-11 09:51:47.802865] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:03.219 [2024-10-11 09:51:47.802945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:03.219 [2024-10-11 09:51:47.803129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:03.219 [2024-10-11 09:51:47.803169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:03.219 [2024-10-11 09:51:47.803458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:03.219 [2024-10-11 09:51:47.811131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:03.219 [2024-10-11 09:51:47.811186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:03.219 [2024-10-11 09:51:47.811411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.219 pt4 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.219 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.479 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:03.479 "name": "raid_bdev1", 00:18:03.479 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:03.479 "strip_size_kb": 64, 00:18:03.479 "state": "online", 00:18:03.479 "raid_level": "raid5f", 00:18:03.479 "superblock": true, 00:18:03.479 "num_base_bdevs": 4, 00:18:03.479 "num_base_bdevs_discovered": 4, 00:18:03.479 "num_base_bdevs_operational": 4, 00:18:03.479 "base_bdevs_list": [ 00:18:03.479 { 00:18:03.479 "name": "pt1", 00:18:03.479 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.479 "is_configured": true, 00:18:03.479 "data_offset": 2048, 00:18:03.479 "data_size": 63488 00:18:03.479 }, 00:18:03.479 { 00:18:03.479 "name": "pt2", 00:18:03.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.479 "is_configured": true, 00:18:03.479 "data_offset": 2048, 00:18:03.479 "data_size": 63488 00:18:03.479 }, 00:18:03.479 { 00:18:03.479 "name": "pt3", 00:18:03.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.479 "is_configured": true, 00:18:03.479 "data_offset": 2048, 00:18:03.479 "data_size": 63488 00:18:03.479 }, 00:18:03.479 { 00:18:03.479 "name": "pt4", 00:18:03.479 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.479 "is_configured": true, 00:18:03.479 "data_offset": 2048, 00:18:03.479 "data_size": 63488 00:18:03.479 } 00:18:03.479 ] 00:18:03.479 }' 00:18:03.479 09:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.479 09:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:03.738 
09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.738 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.739 [2024-10-11 09:51:48.278536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.739 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.739 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.739 "name": "raid_bdev1", 00:18:03.739 "aliases": [ 00:18:03.739 "64d96e15-1f26-43c1-9588-5a1957f0d5e7" 00:18:03.739 ], 00:18:03.739 "product_name": "Raid Volume", 00:18:03.739 "block_size": 512, 00:18:03.739 "num_blocks": 190464, 00:18:03.739 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:03.739 "assigned_rate_limits": { 00:18:03.739 "rw_ios_per_sec": 0, 00:18:03.739 "rw_mbytes_per_sec": 0, 00:18:03.739 "r_mbytes_per_sec": 0, 00:18:03.739 "w_mbytes_per_sec": 0 00:18:03.739 }, 00:18:03.739 "claimed": false, 00:18:03.739 "zoned": false, 00:18:03.739 "supported_io_types": { 00:18:03.739 "read": true, 00:18:03.739 "write": true, 00:18:03.739 "unmap": false, 00:18:03.739 "flush": false, 00:18:03.739 "reset": true, 00:18:03.739 "nvme_admin": false, 00:18:03.739 "nvme_io": false, 00:18:03.739 "nvme_io_md": false, 00:18:03.739 "write_zeroes": true, 00:18:03.739 "zcopy": false, 00:18:03.739 "get_zone_info": false, 00:18:03.739 "zone_management": false, 00:18:03.739 "zone_append": false, 00:18:03.739 "compare": false, 
00:18:03.739 "compare_and_write": false, 00:18:03.739 "abort": false, 00:18:03.739 "seek_hole": false, 00:18:03.739 "seek_data": false, 00:18:03.739 "copy": false, 00:18:03.739 "nvme_iov_md": false 00:18:03.739 }, 00:18:03.739 "driver_specific": { 00:18:03.739 "raid": { 00:18:03.739 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:03.739 "strip_size_kb": 64, 00:18:03.739 "state": "online", 00:18:03.739 "raid_level": "raid5f", 00:18:03.739 "superblock": true, 00:18:03.739 "num_base_bdevs": 4, 00:18:03.739 "num_base_bdevs_discovered": 4, 00:18:03.739 "num_base_bdevs_operational": 4, 00:18:03.739 "base_bdevs_list": [ 00:18:03.739 { 00:18:03.739 "name": "pt1", 00:18:03.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.739 "is_configured": true, 00:18:03.739 "data_offset": 2048, 00:18:03.739 "data_size": 63488 00:18:03.739 }, 00:18:03.739 { 00:18:03.739 "name": "pt2", 00:18:03.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.739 "is_configured": true, 00:18:03.739 "data_offset": 2048, 00:18:03.739 "data_size": 63488 00:18:03.739 }, 00:18:03.739 { 00:18:03.739 "name": "pt3", 00:18:03.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.739 "is_configured": true, 00:18:03.739 "data_offset": 2048, 00:18:03.739 "data_size": 63488 00:18:03.739 }, 00:18:03.739 { 00:18:03.739 "name": "pt4", 00:18:03.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.739 "is_configured": true, 00:18:03.739 "data_offset": 2048, 00:18:03.739 "data_size": 63488 00:18:03.739 } 00:18:03.739 ] 00:18:03.739 } 00:18:03.739 } 00:18:03.739 }' 00:18:03.739 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.997 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:03.998 pt2 00:18:03.998 pt3 00:18:03.998 pt4' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.998 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:04.257 09:51:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.257 [2024-10-11 09:51:48.637917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64d96e15-1f26-43c1-9588-5a1957f0d5e7 '!=' 64d96e15-1f26-43c1-9588-5a1957f0d5e7 ']' 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.257 [2024-10-11 09:51:48.681694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.257 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.258 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.258 "name": "raid_bdev1", 00:18:04.258 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:04.258 "strip_size_kb": 64, 00:18:04.258 "state": "online", 00:18:04.258 "raid_level": "raid5f", 00:18:04.258 "superblock": true, 00:18:04.258 "num_base_bdevs": 4, 00:18:04.258 "num_base_bdevs_discovered": 3, 00:18:04.258 "num_base_bdevs_operational": 3, 00:18:04.258 "base_bdevs_list": [ 00:18:04.258 { 00:18:04.258 "name": null, 00:18:04.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.258 "is_configured": false, 00:18:04.258 "data_offset": 0, 00:18:04.258 "data_size": 63488 00:18:04.258 }, 00:18:04.258 { 00:18:04.258 "name": "pt2", 00:18:04.258 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.258 "is_configured": true, 00:18:04.258 "data_offset": 2048, 00:18:04.258 "data_size": 63488 00:18:04.258 }, 00:18:04.258 { 00:18:04.258 "name": "pt3", 00:18:04.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.258 "is_configured": true, 00:18:04.258 "data_offset": 2048, 00:18:04.258 "data_size": 63488 00:18:04.258 }, 00:18:04.258 { 00:18:04.258 "name": "pt4", 00:18:04.258 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.258 "is_configured": true, 00:18:04.258 "data_offset": 2048, 00:18:04.258 "data_size": 63488 00:18:04.258 } 00:18:04.258 ] 00:18:04.258 }' 00:18:04.258 09:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.258 09:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.518 [2024-10-11 09:51:49.096923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.518 [2024-10-11 09:51:49.097011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.518 [2024-10-11 09:51:49.097102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.518 [2024-10-11 09:51:49.097183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.518 [2024-10-11 09:51:49.097193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.518 09:51:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.518 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.777 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.778 09:51:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.778 [2024-10-11 09:51:49.192744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.778 [2024-10-11 09:51:49.192799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.778 [2024-10-11 09:51:49.192820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:04.778 [2024-10-11 09:51:49.192829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.778 [2024-10-11 09:51:49.195429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:04.778 [2024-10-11 09:51:49.195586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.778 [2024-10-11 09:51:49.195767] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:04.778 [2024-10-11 09:51:49.195818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.778 pt2 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.778 "name": "raid_bdev1", 00:18:04.778 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:04.778 "strip_size_kb": 64, 00:18:04.778 "state": "configuring", 00:18:04.778 "raid_level": "raid5f", 00:18:04.778 "superblock": true, 00:18:04.778 "num_base_bdevs": 4, 00:18:04.778 "num_base_bdevs_discovered": 1, 00:18:04.778 "num_base_bdevs_operational": 3, 00:18:04.778 "base_bdevs_list": [ 00:18:04.778 { 00:18:04.778 "name": null, 00:18:04.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.778 "is_configured": false, 00:18:04.778 "data_offset": 2048, 00:18:04.778 "data_size": 63488 00:18:04.778 }, 00:18:04.778 { 00:18:04.778 "name": "pt2", 00:18:04.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.778 "is_configured": true, 00:18:04.778 "data_offset": 2048, 00:18:04.778 "data_size": 63488 00:18:04.778 }, 00:18:04.778 { 00:18:04.778 "name": null, 00:18:04.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.778 "is_configured": false, 00:18:04.778 "data_offset": 2048, 00:18:04.778 "data_size": 63488 00:18:04.778 }, 00:18:04.778 { 00:18:04.778 "name": null, 00:18:04.778 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.778 "is_configured": false, 00:18:04.778 "data_offset": 2048, 00:18:04.778 "data_size": 63488 00:18:04.778 } 00:18:04.778 ] 00:18:04.778 }' 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.778 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.037 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:05.037 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- 
# (( i < num_base_bdevs - 1 )) 00:18:05.037 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:05.037 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.037 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.037 [2024-10-11 09:51:49.663990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:05.037 [2024-10-11 09:51:49.664304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.037 [2024-10-11 09:51:49.664432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:05.037 [2024-10-11 09:51:49.664527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.037 [2024-10-11 09:51:49.665125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.037 [2024-10-11 09:51:49.665251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:05.037 [2024-10-11 09:51:49.665446] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:05.037 [2024-10-11 09:51:49.665521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:05.037 pt3 00:18:05.295 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.295 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:05.295 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.295 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.295 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:18:05.295 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.295 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.296 "name": "raid_bdev1", 00:18:05.296 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:05.296 "strip_size_kb": 64, 00:18:05.296 "state": "configuring", 00:18:05.296 "raid_level": "raid5f", 00:18:05.296 "superblock": true, 00:18:05.296 "num_base_bdevs": 4, 00:18:05.296 "num_base_bdevs_discovered": 2, 00:18:05.296 "num_base_bdevs_operational": 3, 00:18:05.296 "base_bdevs_list": [ 00:18:05.296 { 00:18:05.296 "name": null, 00:18:05.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.296 "is_configured": false, 00:18:05.296 "data_offset": 2048, 00:18:05.296 "data_size": 63488 00:18:05.296 }, 00:18:05.296 { 00:18:05.296 "name": "pt2", 00:18:05.296 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:05.296 "is_configured": true, 00:18:05.296 "data_offset": 2048, 00:18:05.296 "data_size": 63488 00:18:05.296 }, 00:18:05.296 { 00:18:05.296 "name": "pt3", 00:18:05.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.296 "is_configured": true, 00:18:05.296 "data_offset": 2048, 00:18:05.296 "data_size": 63488 00:18:05.296 }, 00:18:05.296 { 00:18:05.296 "name": null, 00:18:05.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.296 "is_configured": false, 00:18:05.296 "data_offset": 2048, 00:18:05.296 "data_size": 63488 00:18:05.296 } 00:18:05.296 ] 00:18:05.296 }' 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.296 09:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.555 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:05.555 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:05.555 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:05.555 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:05.555 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.555 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.555 [2024-10-11 09:51:50.115249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:05.555 [2024-10-11 09:51:50.115581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.555 [2024-10-11 09:51:50.115614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:05.555 [2024-10-11 09:51:50.115624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:05.555 [2024-10-11 09:51:50.116183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.555 [2024-10-11 09:51:50.116216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:05.555 [2024-10-11 09:51:50.116308] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:05.555 [2024-10-11 09:51:50.116338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:05.555 [2024-10-11 09:51:50.116484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:05.555 [2024-10-11 09:51:50.116493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:05.556 [2024-10-11 09:51:50.116769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:05.556 [2024-10-11 09:51:50.124379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:05.556 [2024-10-11 09:51:50.124406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:05.556 [2024-10-11 09:51:50.124706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.556 pt4 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.556 09:51:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.556 "name": "raid_bdev1", 00:18:05.556 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:05.556 "strip_size_kb": 64, 00:18:05.556 "state": "online", 00:18:05.556 "raid_level": "raid5f", 00:18:05.556 "superblock": true, 00:18:05.556 "num_base_bdevs": 4, 00:18:05.556 "num_base_bdevs_discovered": 3, 00:18:05.556 "num_base_bdevs_operational": 3, 00:18:05.556 "base_bdevs_list": [ 00:18:05.556 { 00:18:05.556 "name": null, 00:18:05.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.556 "is_configured": false, 00:18:05.556 "data_offset": 2048, 00:18:05.556 "data_size": 63488 00:18:05.556 }, 00:18:05.556 { 00:18:05.556 "name": "pt2", 00:18:05.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.556 "is_configured": true, 00:18:05.556 "data_offset": 2048, 00:18:05.556 "data_size": 
63488 00:18:05.556 }, 00:18:05.556 { 00:18:05.556 "name": "pt3", 00:18:05.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.556 "is_configured": true, 00:18:05.556 "data_offset": 2048, 00:18:05.556 "data_size": 63488 00:18:05.556 }, 00:18:05.556 { 00:18:05.556 "name": "pt4", 00:18:05.556 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:05.556 "is_configured": true, 00:18:05.556 "data_offset": 2048, 00:18:05.556 "data_size": 63488 00:18:05.556 } 00:18:05.556 ] 00:18:05.556 }' 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.556 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 [2024-10-11 09:51:50.624299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.125 [2024-10-11 09:51:50.624332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.125 [2024-10-11 09:51:50.624422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.125 [2024-10-11 09:51:50.624526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.125 [2024-10-11 09:51:50.624543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 [2024-10-11 09:51:50.700119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.125 [2024-10-11 09:51:50.700242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.125 [2024-10-11 09:51:50.700267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:06.125 [2024-10-11 09:51:50.700280] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:18:06.125 [2024-10-11 09:51:50.702681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.125 [2024-10-11 09:51:50.702721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.125 [2024-10-11 09:51:50.702829] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:06.125 [2024-10-11 09:51:50.702883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.125 [2024-10-11 09:51:50.703010] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:06.125 [2024-10-11 09:51:50.703027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.125 [2024-10-11 09:51:50.703042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:06.125 [2024-10-11 09:51:50.703108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.125 [2024-10-11 09:51:50.703213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:06.125 pt1 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.125 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.126 "name": "raid_bdev1", 00:18:06.126 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:06.126 "strip_size_kb": 64, 00:18:06.126 "state": "configuring", 00:18:06.126 "raid_level": "raid5f", 00:18:06.126 "superblock": true, 00:18:06.126 "num_base_bdevs": 4, 00:18:06.126 "num_base_bdevs_discovered": 2, 00:18:06.126 "num_base_bdevs_operational": 3, 00:18:06.126 "base_bdevs_list": [ 00:18:06.126 { 00:18:06.126 "name": null, 00:18:06.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.126 "is_configured": false, 00:18:06.126 "data_offset": 2048, 00:18:06.126 "data_size": 63488 00:18:06.126 }, 00:18:06.126 { 00:18:06.126 "name": "pt2", 00:18:06.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.126 "is_configured": true, 00:18:06.126 
"data_offset": 2048, 00:18:06.126 "data_size": 63488 00:18:06.126 }, 00:18:06.126 { 00:18:06.126 "name": "pt3", 00:18:06.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:06.126 "is_configured": true, 00:18:06.126 "data_offset": 2048, 00:18:06.126 "data_size": 63488 00:18:06.126 }, 00:18:06.126 { 00:18:06.126 "name": null, 00:18:06.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.126 "is_configured": false, 00:18:06.126 "data_offset": 2048, 00:18:06.126 "data_size": 63488 00:18:06.126 } 00:18:06.126 ] 00:18:06.126 }' 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.126 09:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.695 [2024-10-11 09:51:51.187403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:06.695 [2024-10-11 09:51:51.187545] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.695 [2024-10-11 09:51:51.187595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:06.695 [2024-10-11 09:51:51.187630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.695 [2024-10-11 09:51:51.188250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.695 [2024-10-11 09:51:51.188323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:06.695 [2024-10-11 09:51:51.188477] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:06.695 [2024-10-11 09:51:51.188546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:06.695 [2024-10-11 09:51:51.188775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:06.695 [2024-10-11 09:51:51.188826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:06.695 [2024-10-11 09:51:51.189173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:06.695 [2024-10-11 09:51:51.197815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:06.695 [2024-10-11 09:51:51.197873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:06.695 [2024-10-11 09:51:51.198196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.695 pt4 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.695 09:51:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.695 "name": "raid_bdev1", 00:18:06.695 "uuid": "64d96e15-1f26-43c1-9588-5a1957f0d5e7", 00:18:06.695 "strip_size_kb": 64, 00:18:06.695 "state": "online", 00:18:06.695 "raid_level": "raid5f", 00:18:06.695 "superblock": true, 00:18:06.695 "num_base_bdevs": 4, 00:18:06.695 "num_base_bdevs_discovered": 3, 00:18:06.695 "num_base_bdevs_operational": 3, 00:18:06.695 "base_bdevs_list": [ 00:18:06.695 { 00:18:06.695 "name": null, 00:18:06.695 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:06.695 "is_configured": false, 00:18:06.695 "data_offset": 2048, 00:18:06.695 "data_size": 63488 00:18:06.695 }, 00:18:06.695 { 00:18:06.695 "name": "pt2", 00:18:06.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.695 "is_configured": true, 00:18:06.695 "data_offset": 2048, 00:18:06.695 "data_size": 63488 00:18:06.695 }, 00:18:06.695 { 00:18:06.695 "name": "pt3", 00:18:06.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:06.695 "is_configured": true, 00:18:06.695 "data_offset": 2048, 00:18:06.695 "data_size": 63488 00:18:06.695 }, 00:18:06.695 { 00:18:06.695 "name": "pt4", 00:18:06.695 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.695 "is_configured": true, 00:18:06.695 "data_offset": 2048, 00:18:06.695 "data_size": 63488 00:18:06.695 } 00:18:06.695 ] 00:18:06.695 }' 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.695 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.265 09:51:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.265 [2024-10-11 09:51:51.702131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 64d96e15-1f26-43c1-9588-5a1957f0d5e7 '!=' 64d96e15-1f26-43c1-9588-5a1957f0d5e7 ']' 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84692 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84692 ']' 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84692 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84692 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84692' 00:18:07.265 killing process with pid 84692 00:18:07.265 09:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84692 00:18:07.265 [2024-10-11 09:51:51.788485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.265 [2024-10-11 09:51:51.788595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.265 09:51:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84692 00:18:07.265 [2024-10-11 09:51:51.788678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.265 [2024-10-11 09:51:51.788691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:07.834 [2024-10-11 09:51:52.162040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.775 ************************************ 00:18:08.775 END TEST raid5f_superblock_test 00:18:08.775 09:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:08.775 00:18:08.775 real 0m8.608s 00:18:08.775 user 0m13.554s 00:18:08.775 sys 0m1.625s 00:18:08.775 09:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:08.775 09:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.775 ************************************ 00:18:08.775 09:51:53 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:08.775 09:51:53 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:08.775 09:51:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:08.775 09:51:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:08.775 09:51:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.775 ************************************ 00:18:08.775 START TEST raid5f_rebuild_test 00:18:08.775 ************************************ 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 
00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85177 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85177 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85177 ']' 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.775 09:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.036 [2024-10-11 09:51:53.412868] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:18:09.036 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:09.036 Zero copy mechanism will not be used. 00:18:09.036 [2024-10-11 09:51:53.413433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85177 ] 00:18:09.036 [2024-10-11 09:51:53.576550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.295 [2024-10-11 09:51:53.701774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.295 [2024-10-11 09:51:53.925262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.554 [2024-10-11 09:51:53.925401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.812 BaseBdev1_malloc 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.812 
09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.812 [2024-10-11 09:51:54.318994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:09.812 [2024-10-11 09:51:54.319061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.812 [2024-10-11 09:51:54.319082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:09.812 [2024-10-11 09:51:54.319093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.812 [2024-10-11 09:51:54.321340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.812 [2024-10-11 09:51:54.321415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:09.812 BaseBdev1 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:09.812 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 BaseBdev2_malloc 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 [2024-10-11 09:51:54.378429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:09.813 [2024-10-11 09:51:54.378494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.813 [2024-10-11 09:51:54.378513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:09.813 [2024-10-11 09:51:54.378527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.813 [2024-10-11 09:51:54.380875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.813 [2024-10-11 09:51:54.380959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:09.813 BaseBdev2 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.813 BaseBdev3_malloc 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.813 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.072 [2024-10-11 09:51:54.447571] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:10.072 [2024-10-11 09:51:54.447634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.072 [2024-10-11 09:51:54.447656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:10.072 [2024-10-11 09:51:54.447668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.072 [2024-10-11 09:51:54.450051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.072 [2024-10-11 09:51:54.450136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:10.072 BaseBdev3 00:18:10.072 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 BaseBdev4_malloc 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 [2024-10-11 09:51:54.507613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:10.073 [2024-10-11 09:51:54.507692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.073 [2024-10-11 
09:51:54.507718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:10.073 [2024-10-11 09:51:54.507731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.073 [2024-10-11 09:51:54.510038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.073 [2024-10-11 09:51:54.510081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:10.073 BaseBdev4 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 spare_malloc 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 spare_delay 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 [2024-10-11 09:51:54.579617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on spare_delay 00:18:10.073 [2024-10-11 09:51:54.579676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.073 [2024-10-11 09:51:54.579720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:10.073 [2024-10-11 09:51:54.579730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.073 [2024-10-11 09:51:54.581844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.073 [2024-10-11 09:51:54.581880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.073 spare 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 [2024-10-11 09:51:54.591648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.073 [2024-10-11 09:51:54.593726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.073 [2024-10-11 09:51:54.593827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.073 [2024-10-11 09:51:54.593890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:10.073 [2024-10-11 09:51:54.593995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:10.073 [2024-10-11 09:51:54.594009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:10.073 [2024-10-11 09:51:54.594308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005fb0 00:18:10.073 [2024-10-11 09:51:54.602425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:10.073 [2024-10-11 09:51:54.602482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:10.073 [2024-10-11 09:51:54.602725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.073 09:51:54 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.073 "name": "raid_bdev1", 00:18:10.073 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:10.073 "strip_size_kb": 64, 00:18:10.073 "state": "online", 00:18:10.073 "raid_level": "raid5f", 00:18:10.073 "superblock": false, 00:18:10.073 "num_base_bdevs": 4, 00:18:10.073 "num_base_bdevs_discovered": 4, 00:18:10.073 "num_base_bdevs_operational": 4, 00:18:10.073 "base_bdevs_list": [ 00:18:10.073 { 00:18:10.073 "name": "BaseBdev1", 00:18:10.073 "uuid": "bfc64fd6-1495-5f76-b355-64f11b1ed6d8", 00:18:10.073 "is_configured": true, 00:18:10.073 "data_offset": 0, 00:18:10.073 "data_size": 65536 00:18:10.073 }, 00:18:10.073 { 00:18:10.073 "name": "BaseBdev2", 00:18:10.073 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:10.073 "is_configured": true, 00:18:10.073 "data_offset": 0, 00:18:10.073 "data_size": 65536 00:18:10.073 }, 00:18:10.073 { 00:18:10.073 "name": "BaseBdev3", 00:18:10.073 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:10.073 "is_configured": true, 00:18:10.073 "data_offset": 0, 00:18:10.073 "data_size": 65536 00:18:10.073 }, 00:18:10.073 { 00:18:10.073 "name": "BaseBdev4", 00:18:10.073 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:10.073 "is_configured": true, 00:18:10.073 "data_offset": 0, 00:18:10.073 "data_size": 65536 00:18:10.073 } 00:18:10.073 ] 00:18:10.073 }' 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.073 09:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:10.643 [2024-10-11 09:51:55.085730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:10.643 09:51:55 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.643 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:10.910 [2024-10-11 09:51:55.385063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:10.910 /dev/nbd0 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:10.910 1+0 records in 00:18:10.910 1+0 records out 00:18:10.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299409 s, 13.7 MB/s 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:10.910 09:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:11.510 512+0 records in 00:18:11.510 512+0 records out 00:18:11.510 100663296 bytes (101 MB, 96 MiB) copied, 0.586438 s, 172 MB/s 00:18:11.510 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:11.510 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.510 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:11.510 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:11.510 09:51:56 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:11.510 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.510 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:11.769 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:11.769 [2024-10-11 09:51:56.266154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.769 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 [2024-10-11 09:51:56.279409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.770 "name": "raid_bdev1", 00:18:11.770 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:11.770 "strip_size_kb": 64, 00:18:11.770 "state": "online", 00:18:11.770 "raid_level": "raid5f", 00:18:11.770 "superblock": false, 00:18:11.770 "num_base_bdevs": 4, 00:18:11.770 "num_base_bdevs_discovered": 3, 00:18:11.770 "num_base_bdevs_operational": 3, 00:18:11.770 "base_bdevs_list": [ 00:18:11.770 { 00:18:11.770 "name": null, 00:18:11.770 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:11.770 "is_configured": false, 00:18:11.770 "data_offset": 0, 00:18:11.770 "data_size": 65536 00:18:11.770 }, 00:18:11.770 { 00:18:11.770 "name": "BaseBdev2", 00:18:11.770 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:11.770 "is_configured": true, 00:18:11.770 "data_offset": 0, 00:18:11.770 "data_size": 65536 00:18:11.770 }, 00:18:11.770 { 00:18:11.770 "name": "BaseBdev3", 00:18:11.770 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:11.770 "is_configured": true, 00:18:11.770 "data_offset": 0, 00:18:11.770 "data_size": 65536 00:18:11.770 }, 00:18:11.770 { 00:18:11.770 "name": "BaseBdev4", 00:18:11.770 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:11.770 "is_configured": true, 00:18:11.770 "data_offset": 0, 00:18:11.770 "data_size": 65536 00:18:11.770 } 00:18:11.770 ] 00:18:11.770 }' 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.770 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.337 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.337 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.337 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.337 [2024-10-11 09:51:56.746641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.337 [2024-10-11 09:51:56.768154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:12.337 09:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.337 09:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:12.337 [2024-10-11 09:51:56.780095] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.270 09:51:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.270 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.270 "name": "raid_bdev1", 00:18:13.270 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:13.270 "strip_size_kb": 64, 00:18:13.270 "state": "online", 00:18:13.270 "raid_level": "raid5f", 00:18:13.270 "superblock": false, 00:18:13.270 "num_base_bdevs": 4, 00:18:13.270 "num_base_bdevs_discovered": 4, 00:18:13.270 "num_base_bdevs_operational": 4, 00:18:13.270 "process": { 00:18:13.270 "type": "rebuild", 00:18:13.270 "target": "spare", 00:18:13.270 "progress": { 00:18:13.270 "blocks": 17280, 00:18:13.270 "percent": 8 00:18:13.270 } 00:18:13.270 }, 00:18:13.270 "base_bdevs_list": [ 00:18:13.270 { 00:18:13.270 "name": "spare", 00:18:13.270 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:13.270 "is_configured": true, 00:18:13.270 "data_offset": 0, 00:18:13.270 "data_size": 65536 00:18:13.270 }, 
00:18:13.271 { 00:18:13.271 "name": "BaseBdev2", 00:18:13.271 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:13.271 "is_configured": true, 00:18:13.271 "data_offset": 0, 00:18:13.271 "data_size": 65536 00:18:13.271 }, 00:18:13.271 { 00:18:13.271 "name": "BaseBdev3", 00:18:13.271 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:13.271 "is_configured": true, 00:18:13.271 "data_offset": 0, 00:18:13.271 "data_size": 65536 00:18:13.271 }, 00:18:13.271 { 00:18:13.271 "name": "BaseBdev4", 00:18:13.271 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:13.271 "is_configured": true, 00:18:13.271 "data_offset": 0, 00:18:13.271 "data_size": 65536 00:18:13.271 } 00:18:13.271 ] 00:18:13.271 }' 00:18:13.271 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.271 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.271 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.528 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.528 09:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:13.528 09:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.528 09:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.528 [2024-10-11 09:51:57.908030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.528 [2024-10-11 09:51:57.989843] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:13.528 [2024-10-11 09:51:57.989926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.528 [2024-10-11 09:51:57.989946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.528 [2024-10-11 09:51:57.989956] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.528 "name": "raid_bdev1", 
00:18:13.528 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:13.528 "strip_size_kb": 64, 00:18:13.528 "state": "online", 00:18:13.528 "raid_level": "raid5f", 00:18:13.528 "superblock": false, 00:18:13.528 "num_base_bdevs": 4, 00:18:13.528 "num_base_bdevs_discovered": 3, 00:18:13.528 "num_base_bdevs_operational": 3, 00:18:13.528 "base_bdevs_list": [ 00:18:13.528 { 00:18:13.528 "name": null, 00:18:13.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.528 "is_configured": false, 00:18:13.528 "data_offset": 0, 00:18:13.528 "data_size": 65536 00:18:13.528 }, 00:18:13.528 { 00:18:13.528 "name": "BaseBdev2", 00:18:13.528 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:13.528 "is_configured": true, 00:18:13.528 "data_offset": 0, 00:18:13.528 "data_size": 65536 00:18:13.528 }, 00:18:13.528 { 00:18:13.528 "name": "BaseBdev3", 00:18:13.528 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:13.528 "is_configured": true, 00:18:13.528 "data_offset": 0, 00:18:13.528 "data_size": 65536 00:18:13.528 }, 00:18:13.528 { 00:18:13.528 "name": "BaseBdev4", 00:18:13.528 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:13.528 "is_configured": true, 00:18:13.528 "data_offset": 0, 00:18:13.528 "data_size": 65536 00:18:13.528 } 00:18:13.528 ] 00:18:13.528 }' 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.528 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 
-- # local raid_bdev_info 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.786 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.045 "name": "raid_bdev1", 00:18:14.045 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:14.045 "strip_size_kb": 64, 00:18:14.045 "state": "online", 00:18:14.045 "raid_level": "raid5f", 00:18:14.045 "superblock": false, 00:18:14.045 "num_base_bdevs": 4, 00:18:14.045 "num_base_bdevs_discovered": 3, 00:18:14.045 "num_base_bdevs_operational": 3, 00:18:14.045 "base_bdevs_list": [ 00:18:14.045 { 00:18:14.045 "name": null, 00:18:14.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.045 "is_configured": false, 00:18:14.045 "data_offset": 0, 00:18:14.045 "data_size": 65536 00:18:14.045 }, 00:18:14.045 { 00:18:14.045 "name": "BaseBdev2", 00:18:14.045 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:14.045 "is_configured": true, 00:18:14.045 "data_offset": 0, 00:18:14.045 "data_size": 65536 00:18:14.045 }, 00:18:14.045 { 00:18:14.045 "name": "BaseBdev3", 00:18:14.045 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:14.045 "is_configured": true, 00:18:14.045 "data_offset": 0, 00:18:14.045 "data_size": 65536 00:18:14.045 }, 00:18:14.045 { 00:18:14.045 "name": "BaseBdev4", 00:18:14.045 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:14.045 "is_configured": true, 00:18:14.045 "data_offset": 0, 00:18:14.045 "data_size": 65536 00:18:14.045 } 00:18:14.045 ] 00:18:14.045 }' 00:18:14.045 
09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.045 [2024-10-11 09:51:58.547929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.045 [2024-10-11 09:51:58.564855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.045 09:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:14.045 [2024-10-11 09:51:58.575008] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.979 09:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.238 "name": "raid_bdev1", 00:18:15.238 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:15.238 "strip_size_kb": 64, 00:18:15.238 "state": "online", 00:18:15.238 "raid_level": "raid5f", 00:18:15.238 "superblock": false, 00:18:15.238 "num_base_bdevs": 4, 00:18:15.238 "num_base_bdevs_discovered": 4, 00:18:15.238 "num_base_bdevs_operational": 4, 00:18:15.238 "process": { 00:18:15.238 "type": "rebuild", 00:18:15.238 "target": "spare", 00:18:15.238 "progress": { 00:18:15.238 "blocks": 17280, 00:18:15.238 "percent": 8 00:18:15.238 } 00:18:15.238 }, 00:18:15.238 "base_bdevs_list": [ 00:18:15.238 { 00:18:15.238 "name": "spare", 00:18:15.238 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 00:18:15.238 "data_size": 65536 00:18:15.238 }, 00:18:15.238 { 00:18:15.238 "name": "BaseBdev2", 00:18:15.238 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 00:18:15.238 "data_size": 65536 00:18:15.238 }, 00:18:15.238 { 00:18:15.238 "name": "BaseBdev3", 00:18:15.238 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 00:18:15.238 "data_size": 65536 00:18:15.238 }, 00:18:15.238 { 00:18:15.238 "name": "BaseBdev4", 00:18:15.238 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 
00:18:15.238 "data_size": 65536 00:18:15.238 } 00:18:15.238 ] 00:18:15.238 }' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=635 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.238 "name": "raid_bdev1", 00:18:15.238 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:15.238 "strip_size_kb": 64, 00:18:15.238 "state": "online", 00:18:15.238 "raid_level": "raid5f", 00:18:15.238 "superblock": false, 00:18:15.238 "num_base_bdevs": 4, 00:18:15.238 "num_base_bdevs_discovered": 4, 00:18:15.238 "num_base_bdevs_operational": 4, 00:18:15.238 "process": { 00:18:15.238 "type": "rebuild", 00:18:15.238 "target": "spare", 00:18:15.238 "progress": { 00:18:15.238 "blocks": 21120, 00:18:15.238 "percent": 10 00:18:15.238 } 00:18:15.238 }, 00:18:15.238 "base_bdevs_list": [ 00:18:15.238 { 00:18:15.238 "name": "spare", 00:18:15.238 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 00:18:15.238 "data_size": 65536 00:18:15.238 }, 00:18:15.238 { 00:18:15.238 "name": "BaseBdev2", 00:18:15.238 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 00:18:15.238 "data_size": 65536 00:18:15.238 }, 00:18:15.238 { 00:18:15.238 "name": "BaseBdev3", 00:18:15.238 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 00:18:15.238 "data_size": 65536 00:18:15.238 }, 00:18:15.238 { 00:18:15.238 "name": "BaseBdev4", 00:18:15.238 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:15.238 "is_configured": true, 00:18:15.238 "data_offset": 0, 00:18:15.238 "data_size": 65536 00:18:15.238 } 00:18:15.238 ] 00:18:15.238 }' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:15.238 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.497 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.497 09:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.434 "name": "raid_bdev1", 00:18:16.434 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:16.434 "strip_size_kb": 64, 00:18:16.434 "state": "online", 00:18:16.434 "raid_level": "raid5f", 00:18:16.434 "superblock": false, 00:18:16.434 "num_base_bdevs": 4, 00:18:16.434 "num_base_bdevs_discovered": 4, 00:18:16.434 
"num_base_bdevs_operational": 4, 00:18:16.434 "process": { 00:18:16.434 "type": "rebuild", 00:18:16.434 "target": "spare", 00:18:16.434 "progress": { 00:18:16.434 "blocks": 44160, 00:18:16.434 "percent": 22 00:18:16.434 } 00:18:16.434 }, 00:18:16.434 "base_bdevs_list": [ 00:18:16.434 { 00:18:16.434 "name": "spare", 00:18:16.434 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:16.434 "is_configured": true, 00:18:16.434 "data_offset": 0, 00:18:16.434 "data_size": 65536 00:18:16.434 }, 00:18:16.434 { 00:18:16.434 "name": "BaseBdev2", 00:18:16.434 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:16.434 "is_configured": true, 00:18:16.434 "data_offset": 0, 00:18:16.434 "data_size": 65536 00:18:16.434 }, 00:18:16.434 { 00:18:16.434 "name": "BaseBdev3", 00:18:16.434 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:16.434 "is_configured": true, 00:18:16.434 "data_offset": 0, 00:18:16.434 "data_size": 65536 00:18:16.434 }, 00:18:16.434 { 00:18:16.434 "name": "BaseBdev4", 00:18:16.434 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:16.434 "is_configured": true, 00:18:16.434 "data_offset": 0, 00:18:16.434 "data_size": 65536 00:18:16.434 } 00:18:16.434 ] 00:18:16.434 }' 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.434 09:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.434 09:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.434 09:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.811 09:52:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.811 "name": "raid_bdev1", 00:18:17.811 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:17.811 "strip_size_kb": 64, 00:18:17.811 "state": "online", 00:18:17.811 "raid_level": "raid5f", 00:18:17.811 "superblock": false, 00:18:17.811 "num_base_bdevs": 4, 00:18:17.811 "num_base_bdevs_discovered": 4, 00:18:17.811 "num_base_bdevs_operational": 4, 00:18:17.811 "process": { 00:18:17.811 "type": "rebuild", 00:18:17.811 "target": "spare", 00:18:17.811 "progress": { 00:18:17.811 "blocks": 65280, 00:18:17.811 "percent": 33 00:18:17.811 } 00:18:17.811 }, 00:18:17.811 "base_bdevs_list": [ 00:18:17.811 { 00:18:17.811 "name": "spare", 00:18:17.811 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:17.811 "is_configured": true, 00:18:17.811 "data_offset": 0, 00:18:17.811 "data_size": 65536 00:18:17.811 }, 00:18:17.811 { 00:18:17.811 "name": "BaseBdev2", 00:18:17.811 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:17.811 
"is_configured": true, 00:18:17.811 "data_offset": 0, 00:18:17.811 "data_size": 65536 00:18:17.811 }, 00:18:17.811 { 00:18:17.811 "name": "BaseBdev3", 00:18:17.811 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:17.811 "is_configured": true, 00:18:17.811 "data_offset": 0, 00:18:17.811 "data_size": 65536 00:18:17.811 }, 00:18:17.811 { 00:18:17.811 "name": "BaseBdev4", 00:18:17.811 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:17.811 "is_configured": true, 00:18:17.811 "data_offset": 0, 00:18:17.811 "data_size": 65536 00:18:17.811 } 00:18:17.811 ] 00:18:17.811 }' 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.811 09:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.748 "name": "raid_bdev1", 00:18:18.748 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:18.748 "strip_size_kb": 64, 00:18:18.748 "state": "online", 00:18:18.748 "raid_level": "raid5f", 00:18:18.748 "superblock": false, 00:18:18.748 "num_base_bdevs": 4, 00:18:18.748 "num_base_bdevs_discovered": 4, 00:18:18.748 "num_base_bdevs_operational": 4, 00:18:18.748 "process": { 00:18:18.748 "type": "rebuild", 00:18:18.748 "target": "spare", 00:18:18.748 "progress": { 00:18:18.748 "blocks": 86400, 00:18:18.748 "percent": 43 00:18:18.748 } 00:18:18.748 }, 00:18:18.748 "base_bdevs_list": [ 00:18:18.748 { 00:18:18.748 "name": "spare", 00:18:18.748 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:18.748 "is_configured": true, 00:18:18.748 "data_offset": 0, 00:18:18.748 "data_size": 65536 00:18:18.748 }, 00:18:18.748 { 00:18:18.748 "name": "BaseBdev2", 00:18:18.748 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:18.748 "is_configured": true, 00:18:18.748 "data_offset": 0, 00:18:18.748 "data_size": 65536 00:18:18.748 }, 00:18:18.748 { 00:18:18.748 "name": "BaseBdev3", 00:18:18.748 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:18.748 "is_configured": true, 00:18:18.748 "data_offset": 0, 00:18:18.748 "data_size": 65536 00:18:18.748 }, 00:18:18.748 { 00:18:18.748 "name": "BaseBdev4", 00:18:18.748 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:18.748 "is_configured": true, 00:18:18.748 "data_offset": 0, 00:18:18.748 "data_size": 65536 00:18:18.748 } 00:18:18.748 ] 00:18:18.748 }' 00:18:18.748 09:52:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.748 09:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.686 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.686 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.686 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.686 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.686 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.686 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.687 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.687 09:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.687 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.687 09:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.687 09:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.946 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.946 "name": "raid_bdev1", 00:18:19.946 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:19.946 "strip_size_kb": 64, 00:18:19.946 "state": "online", 
00:18:19.946 "raid_level": "raid5f", 00:18:19.946 "superblock": false, 00:18:19.946 "num_base_bdevs": 4, 00:18:19.946 "num_base_bdevs_discovered": 4, 00:18:19.946 "num_base_bdevs_operational": 4, 00:18:19.946 "process": { 00:18:19.946 "type": "rebuild", 00:18:19.946 "target": "spare", 00:18:19.946 "progress": { 00:18:19.946 "blocks": 107520, 00:18:19.946 "percent": 54 00:18:19.946 } 00:18:19.946 }, 00:18:19.946 "base_bdevs_list": [ 00:18:19.946 { 00:18:19.946 "name": "spare", 00:18:19.946 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:19.946 "is_configured": true, 00:18:19.946 "data_offset": 0, 00:18:19.946 "data_size": 65536 00:18:19.946 }, 00:18:19.946 { 00:18:19.946 "name": "BaseBdev2", 00:18:19.946 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:19.946 "is_configured": true, 00:18:19.946 "data_offset": 0, 00:18:19.946 "data_size": 65536 00:18:19.946 }, 00:18:19.946 { 00:18:19.946 "name": "BaseBdev3", 00:18:19.946 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:19.946 "is_configured": true, 00:18:19.946 "data_offset": 0, 00:18:19.946 "data_size": 65536 00:18:19.946 }, 00:18:19.946 { 00:18:19.946 "name": "BaseBdev4", 00:18:19.946 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:19.946 "is_configured": true, 00:18:19.946 "data_offset": 0, 00:18:19.946 "data_size": 65536 00:18:19.946 } 00:18:19.946 ] 00:18:19.946 }' 00:18:19.946 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.946 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.946 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.946 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.946 09:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.882 "name": "raid_bdev1", 00:18:20.882 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:20.882 "strip_size_kb": 64, 00:18:20.882 "state": "online", 00:18:20.882 "raid_level": "raid5f", 00:18:20.882 "superblock": false, 00:18:20.882 "num_base_bdevs": 4, 00:18:20.882 "num_base_bdevs_discovered": 4, 00:18:20.882 "num_base_bdevs_operational": 4, 00:18:20.882 "process": { 00:18:20.882 "type": "rebuild", 00:18:20.882 "target": "spare", 00:18:20.882 "progress": { 00:18:20.882 "blocks": 130560, 00:18:20.882 "percent": 66 00:18:20.882 } 00:18:20.882 }, 00:18:20.882 "base_bdevs_list": [ 00:18:20.882 { 00:18:20.882 "name": "spare", 00:18:20.882 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:20.882 "is_configured": true, 00:18:20.882 "data_offset": 0, 00:18:20.882 
"data_size": 65536 00:18:20.882 }, 00:18:20.882 { 00:18:20.882 "name": "BaseBdev2", 00:18:20.882 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:20.882 "is_configured": true, 00:18:20.882 "data_offset": 0, 00:18:20.882 "data_size": 65536 00:18:20.882 }, 00:18:20.882 { 00:18:20.882 "name": "BaseBdev3", 00:18:20.882 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:20.882 "is_configured": true, 00:18:20.882 "data_offset": 0, 00:18:20.882 "data_size": 65536 00:18:20.882 }, 00:18:20.882 { 00:18:20.882 "name": "BaseBdev4", 00:18:20.882 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:20.882 "is_configured": true, 00:18:20.882 "data_offset": 0, 00:18:20.882 "data_size": 65536 00:18:20.882 } 00:18:20.882 ] 00:18:20.882 }' 00:18:20.882 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.141 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.141 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.141 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.141 09:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.079 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.079 "name": "raid_bdev1", 00:18:22.079 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:22.079 "strip_size_kb": 64, 00:18:22.079 "state": "online", 00:18:22.079 "raid_level": "raid5f", 00:18:22.079 "superblock": false, 00:18:22.079 "num_base_bdevs": 4, 00:18:22.079 "num_base_bdevs_discovered": 4, 00:18:22.079 "num_base_bdevs_operational": 4, 00:18:22.079 "process": { 00:18:22.079 "type": "rebuild", 00:18:22.079 "target": "spare", 00:18:22.079 "progress": { 00:18:22.079 "blocks": 151680, 00:18:22.079 "percent": 77 00:18:22.079 } 00:18:22.079 }, 00:18:22.079 "base_bdevs_list": [ 00:18:22.079 { 00:18:22.080 "name": "spare", 00:18:22.080 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:22.080 "is_configured": true, 00:18:22.080 "data_offset": 0, 00:18:22.080 "data_size": 65536 00:18:22.080 }, 00:18:22.080 { 00:18:22.080 "name": "BaseBdev2", 00:18:22.080 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:22.080 "is_configured": true, 00:18:22.080 "data_offset": 0, 00:18:22.080 "data_size": 65536 00:18:22.080 }, 00:18:22.080 { 00:18:22.080 "name": "BaseBdev3", 00:18:22.080 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:22.080 "is_configured": true, 00:18:22.080 "data_offset": 0, 00:18:22.080 "data_size": 65536 00:18:22.080 }, 00:18:22.080 { 00:18:22.080 "name": "BaseBdev4", 00:18:22.080 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:22.080 
"is_configured": true, 00:18:22.080 "data_offset": 0, 00:18:22.080 "data_size": 65536 00:18:22.080 } 00:18:22.080 ] 00:18:22.080 }' 00:18:22.080 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.080 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.080 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.339 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.339 09:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.276 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.276 
"name": "raid_bdev1", 00:18:23.276 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:23.276 "strip_size_kb": 64, 00:18:23.276 "state": "online", 00:18:23.276 "raid_level": "raid5f", 00:18:23.276 "superblock": false, 00:18:23.276 "num_base_bdevs": 4, 00:18:23.276 "num_base_bdevs_discovered": 4, 00:18:23.276 "num_base_bdevs_operational": 4, 00:18:23.276 "process": { 00:18:23.276 "type": "rebuild", 00:18:23.276 "target": "spare", 00:18:23.276 "progress": { 00:18:23.276 "blocks": 174720, 00:18:23.276 "percent": 88 00:18:23.276 } 00:18:23.276 }, 00:18:23.276 "base_bdevs_list": [ 00:18:23.276 { 00:18:23.276 "name": "spare", 00:18:23.276 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:23.276 "is_configured": true, 00:18:23.276 "data_offset": 0, 00:18:23.276 "data_size": 65536 00:18:23.276 }, 00:18:23.276 { 00:18:23.276 "name": "BaseBdev2", 00:18:23.276 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:23.276 "is_configured": true, 00:18:23.276 "data_offset": 0, 00:18:23.276 "data_size": 65536 00:18:23.276 }, 00:18:23.276 { 00:18:23.276 "name": "BaseBdev3", 00:18:23.276 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:23.276 "is_configured": true, 00:18:23.276 "data_offset": 0, 00:18:23.276 "data_size": 65536 00:18:23.276 }, 00:18:23.276 { 00:18:23.276 "name": "BaseBdev4", 00:18:23.276 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:23.276 "is_configured": true, 00:18:23.276 "data_offset": 0, 00:18:23.276 "data_size": 65536 00:18:23.277 } 00:18:23.277 ] 00:18:23.277 }' 00:18:23.277 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.277 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.277 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.277 09:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.277 09:52:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.653 "name": "raid_bdev1", 00:18:24.653 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:24.653 "strip_size_kb": 64, 00:18:24.653 "state": "online", 00:18:24.653 "raid_level": "raid5f", 00:18:24.653 "superblock": false, 00:18:24.653 "num_base_bdevs": 4, 00:18:24.653 "num_base_bdevs_discovered": 4, 00:18:24.653 "num_base_bdevs_operational": 4, 00:18:24.653 "process": { 00:18:24.653 "type": "rebuild", 00:18:24.653 "target": "spare", 00:18:24.653 "progress": { 00:18:24.653 "blocks": 195840, 00:18:24.653 "percent": 99 00:18:24.653 } 00:18:24.653 }, 00:18:24.653 "base_bdevs_list": [ 00:18:24.653 { 
00:18:24.653 "name": "spare", 00:18:24.653 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:24.653 "is_configured": true, 00:18:24.653 "data_offset": 0, 00:18:24.653 "data_size": 65536 00:18:24.653 }, 00:18:24.653 { 00:18:24.653 "name": "BaseBdev2", 00:18:24.653 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:24.653 "is_configured": true, 00:18:24.653 "data_offset": 0, 00:18:24.653 "data_size": 65536 00:18:24.653 }, 00:18:24.653 { 00:18:24.653 "name": "BaseBdev3", 00:18:24.653 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:24.653 "is_configured": true, 00:18:24.653 "data_offset": 0, 00:18:24.653 "data_size": 65536 00:18:24.653 }, 00:18:24.653 { 00:18:24.653 "name": "BaseBdev4", 00:18:24.653 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:24.653 "is_configured": true, 00:18:24.653 "data_offset": 0, 00:18:24.653 "data_size": 65536 00:18:24.653 } 00:18:24.653 ] 00:18:24.653 }' 00:18:24.653 [2024-10-11 09:52:08.953959] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:24.653 [2024-10-11 09:52:08.954044] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:24.653 [2024-10-11 09:52:08.954116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.653 09:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.653 09:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.653 09:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.590 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.590 "name": "raid_bdev1", 00:18:25.590 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:25.590 "strip_size_kb": 64, 00:18:25.590 "state": "online", 00:18:25.590 "raid_level": "raid5f", 00:18:25.590 "superblock": false, 00:18:25.590 "num_base_bdevs": 4, 00:18:25.590 "num_base_bdevs_discovered": 4, 00:18:25.590 "num_base_bdevs_operational": 4, 00:18:25.590 "base_bdevs_list": [ 00:18:25.590 { 00:18:25.590 "name": "spare", 00:18:25.590 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:25.590 "is_configured": true, 00:18:25.590 "data_offset": 0, 00:18:25.590 "data_size": 65536 00:18:25.590 }, 00:18:25.590 { 00:18:25.590 "name": "BaseBdev2", 00:18:25.590 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:25.590 "is_configured": true, 00:18:25.590 "data_offset": 0, 00:18:25.590 "data_size": 65536 00:18:25.590 }, 00:18:25.590 
{ 00:18:25.590 "name": "BaseBdev3", 00:18:25.590 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:25.590 "is_configured": true, 00:18:25.590 "data_offset": 0, 00:18:25.590 "data_size": 65536 00:18:25.590 }, 00:18:25.590 { 00:18:25.590 "name": "BaseBdev4", 00:18:25.590 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:25.590 "is_configured": true, 00:18:25.590 "data_offset": 0, 00:18:25.590 "data_size": 65536 00:18:25.590 } 00:18:25.590 ] 00:18:25.590 }' 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.591 "name": "raid_bdev1", 00:18:25.591 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:25.591 "strip_size_kb": 64, 00:18:25.591 "state": "online", 00:18:25.591 "raid_level": "raid5f", 00:18:25.591 "superblock": false, 00:18:25.591 "num_base_bdevs": 4, 00:18:25.591 "num_base_bdevs_discovered": 4, 00:18:25.591 "num_base_bdevs_operational": 4, 00:18:25.591 "base_bdevs_list": [ 00:18:25.591 { 00:18:25.591 "name": "spare", 00:18:25.591 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 0, 00:18:25.591 "data_size": 65536 00:18:25.591 }, 00:18:25.591 { 00:18:25.591 "name": "BaseBdev2", 00:18:25.591 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 0, 00:18:25.591 "data_size": 65536 00:18:25.591 }, 00:18:25.591 { 00:18:25.591 "name": "BaseBdev3", 00:18:25.591 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 0, 00:18:25.591 "data_size": 65536 00:18:25.591 }, 00:18:25.591 { 00:18:25.591 "name": "BaseBdev4", 00:18:25.591 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 0, 00:18:25.591 "data_size": 65536 00:18:25.591 } 00:18:25.591 ] 00:18:25.591 }' 00:18:25.591 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.850 09:52:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.850 "name": "raid_bdev1", 00:18:25.850 "uuid": "51ec5aad-8313-4afa-9cb1-de92633e3be6", 00:18:25.850 "strip_size_kb": 64, 00:18:25.850 "state": "online", 00:18:25.850 "raid_level": "raid5f", 00:18:25.850 "superblock": false, 00:18:25.850 "num_base_bdevs": 4, 00:18:25.850 
"num_base_bdevs_discovered": 4, 00:18:25.850 "num_base_bdevs_operational": 4, 00:18:25.850 "base_bdevs_list": [ 00:18:25.850 { 00:18:25.850 "name": "spare", 00:18:25.850 "uuid": "0862f2d8-9177-527d-b453-b9bf42e7119d", 00:18:25.850 "is_configured": true, 00:18:25.850 "data_offset": 0, 00:18:25.850 "data_size": 65536 00:18:25.850 }, 00:18:25.850 { 00:18:25.850 "name": "BaseBdev2", 00:18:25.850 "uuid": "46a77660-0a36-52a7-ad64-ac55bfe0454a", 00:18:25.850 "is_configured": true, 00:18:25.850 "data_offset": 0, 00:18:25.850 "data_size": 65536 00:18:25.850 }, 00:18:25.850 { 00:18:25.850 "name": "BaseBdev3", 00:18:25.850 "uuid": "b2562d63-8958-572a-9048-d001f2999630", 00:18:25.850 "is_configured": true, 00:18:25.850 "data_offset": 0, 00:18:25.850 "data_size": 65536 00:18:25.850 }, 00:18:25.850 { 00:18:25.850 "name": "BaseBdev4", 00:18:25.850 "uuid": "5f4d5b57-cd4f-5982-a419-cc06e9d4b1ae", 00:18:25.850 "is_configured": true, 00:18:25.850 "data_offset": 0, 00:18:25.850 "data_size": 65536 00:18:25.850 } 00:18:25.850 ] 00:18:25.850 }' 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.850 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.417 [2024-10-11 09:52:10.755928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.417 [2024-10-11 09:52:10.755972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.417 [2024-10-11 09:52:10.756087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.417 [2024-10-11 09:52:10.756214] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.417 [2024-10-11 09:52:10.756229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:26.417 09:52:10 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:26.417 09:52:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:26.676 /dev/nbd0 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.676 1+0 records in 00:18:26.676 1+0 records out 00:18:26.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417965 s, 9.8 MB/s 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:26.676 09:52:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:26.676 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:26.935 /dev/nbd1 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:26.935 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.935 1+0 records in 00:18:26.935 1+0 records out 
00:18:26.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028898 s, 14.2 MB/s 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:26.936 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.194 09:52:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85177 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85177 ']' 00:18:27.453 
09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85177 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85177 00:18:27.453 killing process with pid 85177 00:18:27.453 Received shutdown signal, test time was about 60.000000 seconds 00:18:27.453 00:18:27.453 Latency(us) 00:18:27.453 [2024-10-11T09:52:12.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.453 [2024-10-11T09:52:12.085Z] =================================================================================================================== 00:18:27.453 [2024-10-11T09:52:12.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85177' 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 85177 00:18:27.453 [2024-10-11 09:52:12.052890] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.453 09:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 85177 00:18:28.020 [2024-10-11 09:52:12.530548] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:29.397 00:18:29.397 real 0m20.276s 00:18:29.397 user 0m24.159s 00:18:29.397 sys 0m2.337s 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.397 ************************************ 00:18:29.397 END TEST raid5f_rebuild_test 00:18:29.397 ************************************ 00:18:29.397 09:52:13 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:29.397 09:52:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:29.397 09:52:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.397 09:52:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.397 ************************************ 00:18:29.397 START TEST raid5f_rebuild_test_sb 00:18:29.397 ************************************ 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.397 09:52:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:29.397 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 
00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85698 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85698 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85698 ']' 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.398 09:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.398 Zero copy mechanism will not be used. 00:18:29.398 [2024-10-11 09:52:13.749180] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:18:29.398 [2024-10-11 09:52:13.749304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85698 ] 00:18:29.398 [2024-10-11 09:52:13.911500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.656 [2024-10-11 09:52:14.033114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.656 [2024-10-11 09:52:14.253152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.656 [2024-10-11 09:52:14.253220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 BaseBdev1_malloc 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 [2024-10-11 09:52:14.706990] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.225 [2024-10-11 09:52:14.707076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.225 [2024-10-11 09:52:14.707102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:30.225 [2024-10-11 09:52:14.707114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.225 [2024-10-11 09:52:14.709441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.225 [2024-10-11 09:52:14.709484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.225 BaseBdev1 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 BaseBdev2_malloc 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 [2024-10-11 09:52:14.763703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:30.225 [2024-10-11 09:52:14.763773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:30.225 [2024-10-11 09:52:14.763795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:30.225 [2024-10-11 09:52:14.763806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.225 [2024-10-11 09:52:14.765895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.225 [2024-10-11 09:52:14.765931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:30.225 BaseBdev2 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 BaseBdev3_malloc 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 [2024-10-11 09:52:14.830256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:30.225 [2024-10-11 09:52:14.830316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.225 [2024-10-11 09:52:14.830341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:30.225 [2024-10-11 
09:52:14.830352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.225 [2024-10-11 09:52:14.832538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.225 [2024-10-11 09:52:14.832581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:30.225 BaseBdev3 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.225 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.485 BaseBdev4_malloc 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.485 [2024-10-11 09:52:14.880938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:30.485 [2024-10-11 09:52:14.880998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.485 [2024-10-11 09:52:14.881019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:30.485 [2024-10-11 09:52:14.881029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.485 [2024-10-11 09:52:14.883126] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:18:30.485 [2024-10-11 09:52:14.883162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:30.485 BaseBdev4 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.485 spare_malloc 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.485 spare_delay 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.485 [2024-10-11 09:52:14.949162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.485 [2024-10-11 09:52:14.949232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.485 [2024-10-11 09:52:14.949259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:18:30.485 [2024-10-11 09:52:14.949271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.485 [2024-10-11 09:52:14.951627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.485 [2024-10-11 09:52:14.951672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.485 spare 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.485 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.486 [2024-10-11 09:52:14.961180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.486 [2024-10-11 09:52:14.963351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.486 [2024-10-11 09:52:14.963429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:30.486 [2024-10-11 09:52:14.963489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:30.486 [2024-10-11 09:52:14.963763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:30.486 [2024-10-11 09:52:14.963790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:30.486 [2024-10-11 09:52:14.964104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:30.486 [2024-10-11 09:52:14.973788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:30.486 [2024-10-11 09:52:14.973816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:18:30.486 [2024-10-11 09:52:14.974093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.486 09:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.486 09:52:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.486 "name": "raid_bdev1", 00:18:30.486 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:30.486 "strip_size_kb": 64, 00:18:30.486 "state": "online", 00:18:30.486 "raid_level": "raid5f", 00:18:30.486 "superblock": true, 00:18:30.486 "num_base_bdevs": 4, 00:18:30.486 "num_base_bdevs_discovered": 4, 00:18:30.486 "num_base_bdevs_operational": 4, 00:18:30.486 "base_bdevs_list": [ 00:18:30.486 { 00:18:30.486 "name": "BaseBdev1", 00:18:30.486 "uuid": "6381fa90-7c0c-5fab-97cd-69e8a04a2d4b", 00:18:30.486 "is_configured": true, 00:18:30.486 "data_offset": 2048, 00:18:30.486 "data_size": 63488 00:18:30.486 }, 00:18:30.486 { 00:18:30.486 "name": "BaseBdev2", 00:18:30.486 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:30.486 "is_configured": true, 00:18:30.486 "data_offset": 2048, 00:18:30.486 "data_size": 63488 00:18:30.486 }, 00:18:30.486 { 00:18:30.486 "name": "BaseBdev3", 00:18:30.486 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:30.486 "is_configured": true, 00:18:30.486 "data_offset": 2048, 00:18:30.486 "data_size": 63488 00:18:30.486 }, 00:18:30.486 { 00:18:30.486 "name": "BaseBdev4", 00:18:30.486 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:30.486 "is_configured": true, 00:18:30.486 "data_offset": 2048, 00:18:30.486 "data_size": 63488 00:18:30.486 } 00:18:30.486 ] 00:18:30.486 }' 00:18:30.486 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.486 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.052 09:52:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.052 [2024-10-11 09:52:15.442037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:31.052 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:31.053 09:52:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.053 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:31.311 [2024-10-11 09:52:15.721383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:31.311 /dev/nbd0 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:31.311 1+0 records in 00:18:31.311 
1+0 records out 00:18:31.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375546 s, 10.9 MB/s 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:31.311 09:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:31.878 496+0 records in 00:18:31.878 496+0 records out 00:18:31.878 97517568 bytes (98 MB, 93 MiB) copied, 0.567895 s, 172 MB/s 00:18:31.878 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:31.878 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.878 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:31.878 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:31.878 09:52:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:31.878 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:31.878 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:32.136 [2024-10-11 09:52:16.636214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.136 [2024-10-11 09:52:16.666466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:32.136 09:52:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.136 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.137 "name": "raid_bdev1", 00:18:32.137 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:32.137 "strip_size_kb": 64, 00:18:32.137 "state": "online", 00:18:32.137 "raid_level": "raid5f", 00:18:32.137 "superblock": true, 00:18:32.137 "num_base_bdevs": 4, 00:18:32.137 "num_base_bdevs_discovered": 3, 00:18:32.137 "num_base_bdevs_operational": 3, 00:18:32.137 
"base_bdevs_list": [ 00:18:32.137 { 00:18:32.137 "name": null, 00:18:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.137 "is_configured": false, 00:18:32.137 "data_offset": 0, 00:18:32.137 "data_size": 63488 00:18:32.137 }, 00:18:32.137 { 00:18:32.137 "name": "BaseBdev2", 00:18:32.137 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:32.137 "is_configured": true, 00:18:32.137 "data_offset": 2048, 00:18:32.137 "data_size": 63488 00:18:32.137 }, 00:18:32.137 { 00:18:32.137 "name": "BaseBdev3", 00:18:32.137 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:32.137 "is_configured": true, 00:18:32.137 "data_offset": 2048, 00:18:32.137 "data_size": 63488 00:18:32.137 }, 00:18:32.137 { 00:18:32.137 "name": "BaseBdev4", 00:18:32.137 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:32.137 "is_configured": true, 00:18:32.137 "data_offset": 2048, 00:18:32.137 "data_size": 63488 00:18:32.137 } 00:18:32.137 ] 00:18:32.137 }' 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.137 09:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.711 09:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.711 09:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.711 09:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.711 [2024-10-11 09:52:17.129764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.711 [2024-10-11 09:52:17.152256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:32.711 09:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.711 09:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:32.711 [2024-10-11 09:52:17.165508] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.646 "name": "raid_bdev1", 00:18:33.646 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:33.646 "strip_size_kb": 64, 00:18:33.646 "state": "online", 00:18:33.646 "raid_level": "raid5f", 00:18:33.646 "superblock": true, 00:18:33.646 "num_base_bdevs": 4, 00:18:33.646 "num_base_bdevs_discovered": 4, 00:18:33.646 "num_base_bdevs_operational": 4, 00:18:33.646 "process": { 00:18:33.646 "type": "rebuild", 00:18:33.646 "target": "spare", 00:18:33.646 "progress": { 00:18:33.646 "blocks": 17280, 00:18:33.646 "percent": 9 00:18:33.646 } 00:18:33.646 }, 00:18:33.646 "base_bdevs_list": [ 00:18:33.646 { 00:18:33.646 "name": "spare", 00:18:33.646 "uuid": 
"a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:33.646 "is_configured": true, 00:18:33.646 "data_offset": 2048, 00:18:33.646 "data_size": 63488 00:18:33.646 }, 00:18:33.646 { 00:18:33.646 "name": "BaseBdev2", 00:18:33.646 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:33.646 "is_configured": true, 00:18:33.646 "data_offset": 2048, 00:18:33.646 "data_size": 63488 00:18:33.646 }, 00:18:33.646 { 00:18:33.646 "name": "BaseBdev3", 00:18:33.646 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:33.646 "is_configured": true, 00:18:33.646 "data_offset": 2048, 00:18:33.646 "data_size": 63488 00:18:33.646 }, 00:18:33.646 { 00:18:33.646 "name": "BaseBdev4", 00:18:33.646 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:33.646 "is_configured": true, 00:18:33.646 "data_offset": 2048, 00:18:33.646 "data_size": 63488 00:18:33.646 } 00:18:33.646 ] 00:18:33.646 }' 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.646 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.904 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.905 [2024-10-11 09:52:18.317361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.905 [2024-10-11 09:52:18.375276] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.905 [2024-10-11 09:52:18.375406] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.905 [2024-10-11 09:52:18.375431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.905 [2024-10-11 09:52:18.375444] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.905 "name": "raid_bdev1", 00:18:33.905 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:33.905 "strip_size_kb": 64, 00:18:33.905 "state": "online", 00:18:33.905 "raid_level": "raid5f", 00:18:33.905 "superblock": true, 00:18:33.905 "num_base_bdevs": 4, 00:18:33.905 "num_base_bdevs_discovered": 3, 00:18:33.905 "num_base_bdevs_operational": 3, 00:18:33.905 "base_bdevs_list": [ 00:18:33.905 { 00:18:33.905 "name": null, 00:18:33.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.905 "is_configured": false, 00:18:33.905 "data_offset": 0, 00:18:33.905 "data_size": 63488 00:18:33.905 }, 00:18:33.905 { 00:18:33.905 "name": "BaseBdev2", 00:18:33.905 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:33.905 "is_configured": true, 00:18:33.905 "data_offset": 2048, 00:18:33.905 "data_size": 63488 00:18:33.905 }, 00:18:33.905 { 00:18:33.905 "name": "BaseBdev3", 00:18:33.905 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:33.905 "is_configured": true, 00:18:33.905 "data_offset": 2048, 00:18:33.905 "data_size": 63488 00:18:33.905 }, 00:18:33.905 { 00:18:33.905 "name": "BaseBdev4", 00:18:33.905 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:33.905 "is_configured": true, 00:18:33.905 "data_offset": 2048, 00:18:33.905 "data_size": 63488 00:18:33.905 } 00:18:33.905 ] 00:18:33.905 }' 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.905 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.471 
09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.471 "name": "raid_bdev1", 00:18:34.471 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:34.471 "strip_size_kb": 64, 00:18:34.471 "state": "online", 00:18:34.471 "raid_level": "raid5f", 00:18:34.471 "superblock": true, 00:18:34.471 "num_base_bdevs": 4, 00:18:34.471 "num_base_bdevs_discovered": 3, 00:18:34.471 "num_base_bdevs_operational": 3, 00:18:34.471 "base_bdevs_list": [ 00:18:34.471 { 00:18:34.471 "name": null, 00:18:34.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.471 "is_configured": false, 00:18:34.471 "data_offset": 0, 00:18:34.471 "data_size": 63488 00:18:34.471 }, 00:18:34.471 { 00:18:34.471 "name": "BaseBdev2", 00:18:34.471 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:34.471 "is_configured": true, 00:18:34.471 "data_offset": 2048, 00:18:34.471 "data_size": 63488 00:18:34.471 }, 00:18:34.471 { 00:18:34.471 "name": "BaseBdev3", 00:18:34.471 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:34.471 "is_configured": true, 00:18:34.471 "data_offset": 2048, 00:18:34.471 
"data_size": 63488 00:18:34.471 }, 00:18:34.471 { 00:18:34.471 "name": "BaseBdev4", 00:18:34.471 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:34.471 "is_configured": true, 00:18:34.471 "data_offset": 2048, 00:18:34.471 "data_size": 63488 00:18:34.471 } 00:18:34.471 ] 00:18:34.471 }' 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.471 [2024-10-11 09:52:18.931979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.471 [2024-10-11 09:52:18.951872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.471 09:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:34.471 [2024-10-11 09:52:18.963736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.405 09:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.405 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.405 "name": "raid_bdev1", 00:18:35.405 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:35.405 "strip_size_kb": 64, 00:18:35.405 "state": "online", 00:18:35.405 "raid_level": "raid5f", 00:18:35.405 "superblock": true, 00:18:35.405 "num_base_bdevs": 4, 00:18:35.405 "num_base_bdevs_discovered": 4, 00:18:35.405 "num_base_bdevs_operational": 4, 00:18:35.405 "process": { 00:18:35.405 "type": "rebuild", 00:18:35.405 "target": "spare", 00:18:35.405 "progress": { 00:18:35.405 "blocks": 17280, 00:18:35.405 "percent": 9 00:18:35.405 } 00:18:35.405 }, 00:18:35.405 "base_bdevs_list": [ 00:18:35.405 { 00:18:35.405 "name": "spare", 00:18:35.405 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:35.405 "is_configured": true, 00:18:35.405 "data_offset": 2048, 00:18:35.405 "data_size": 63488 00:18:35.405 }, 00:18:35.405 { 00:18:35.405 "name": "BaseBdev2", 00:18:35.405 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:35.405 "is_configured": true, 00:18:35.405 "data_offset": 2048, 00:18:35.405 "data_size": 63488 00:18:35.405 }, 00:18:35.405 { 
00:18:35.405 "name": "BaseBdev3", 00:18:35.405 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:35.405 "is_configured": true, 00:18:35.405 "data_offset": 2048, 00:18:35.405 "data_size": 63488 00:18:35.405 }, 00:18:35.405 { 00:18:35.405 "name": "BaseBdev4", 00:18:35.405 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:35.405 "is_configured": true, 00:18:35.405 "data_offset": 2048, 00:18:35.405 "data_size": 63488 00:18:35.405 } 00:18:35.405 ] 00:18:35.405 }' 00:18:35.405 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.662 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.662 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:35.663 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=656 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.663 "name": "raid_bdev1", 00:18:35.663 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:35.663 "strip_size_kb": 64, 00:18:35.663 "state": "online", 00:18:35.663 "raid_level": "raid5f", 00:18:35.663 "superblock": true, 00:18:35.663 "num_base_bdevs": 4, 00:18:35.663 "num_base_bdevs_discovered": 4, 00:18:35.663 "num_base_bdevs_operational": 4, 00:18:35.663 "process": { 00:18:35.663 "type": "rebuild", 00:18:35.663 "target": "spare", 00:18:35.663 "progress": { 00:18:35.663 "blocks": 21120, 00:18:35.663 "percent": 11 00:18:35.663 } 00:18:35.663 }, 00:18:35.663 "base_bdevs_list": [ 00:18:35.663 { 00:18:35.663 "name": "spare", 00:18:35.663 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:35.663 "is_configured": true, 00:18:35.663 "data_offset": 2048, 00:18:35.663 "data_size": 63488 00:18:35.663 }, 00:18:35.663 { 00:18:35.663 "name": "BaseBdev2", 00:18:35.663 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:35.663 "is_configured": true, 00:18:35.663 "data_offset": 2048, 00:18:35.663 "data_size": 63488 00:18:35.663 }, 00:18:35.663 { 
00:18:35.663 "name": "BaseBdev3", 00:18:35.663 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:35.663 "is_configured": true, 00:18:35.663 "data_offset": 2048, 00:18:35.663 "data_size": 63488 00:18:35.663 }, 00:18:35.663 { 00:18:35.663 "name": "BaseBdev4", 00:18:35.663 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:35.663 "is_configured": true, 00:18:35.663 "data_offset": 2048, 00:18:35.663 "data_size": 63488 00:18:35.663 } 00:18:35.663 ] 00:18:35.663 }' 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.663 09:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.038 09:52:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.038 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.038 "name": "raid_bdev1", 00:18:37.038 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:37.038 "strip_size_kb": 64, 00:18:37.038 "state": "online", 00:18:37.038 "raid_level": "raid5f", 00:18:37.038 "superblock": true, 00:18:37.038 "num_base_bdevs": 4, 00:18:37.038 "num_base_bdevs_discovered": 4, 00:18:37.038 "num_base_bdevs_operational": 4, 00:18:37.038 "process": { 00:18:37.038 "type": "rebuild", 00:18:37.038 "target": "spare", 00:18:37.038 "progress": { 00:18:37.038 "blocks": 42240, 00:18:37.038 "percent": 22 00:18:37.038 } 00:18:37.038 }, 00:18:37.038 "base_bdevs_list": [ 00:18:37.038 { 00:18:37.038 "name": "spare", 00:18:37.038 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:37.038 "is_configured": true, 00:18:37.038 "data_offset": 2048, 00:18:37.038 "data_size": 63488 00:18:37.038 }, 00:18:37.038 { 00:18:37.038 "name": "BaseBdev2", 00:18:37.038 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:37.038 "is_configured": true, 00:18:37.038 "data_offset": 2048, 00:18:37.038 "data_size": 63488 00:18:37.038 }, 00:18:37.038 { 00:18:37.038 "name": "BaseBdev3", 00:18:37.038 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:37.038 "is_configured": true, 00:18:37.038 "data_offset": 2048, 00:18:37.038 "data_size": 63488 00:18:37.038 }, 00:18:37.038 { 00:18:37.038 "name": "BaseBdev4", 00:18:37.038 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:37.039 "is_configured": true, 00:18:37.039 "data_offset": 2048, 00:18:37.039 "data_size": 63488 00:18:37.039 } 00:18:37.039 ] 00:18:37.039 }' 00:18:37.039 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.039 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.039 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.039 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.039 09:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.973 "name": "raid_bdev1", 00:18:37.973 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:37.973 "strip_size_kb": 64, 00:18:37.973 "state": 
"online", 00:18:37.973 "raid_level": "raid5f", 00:18:37.973 "superblock": true, 00:18:37.973 "num_base_bdevs": 4, 00:18:37.973 "num_base_bdevs_discovered": 4, 00:18:37.973 "num_base_bdevs_operational": 4, 00:18:37.973 "process": { 00:18:37.973 "type": "rebuild", 00:18:37.973 "target": "spare", 00:18:37.973 "progress": { 00:18:37.973 "blocks": 63360, 00:18:37.973 "percent": 33 00:18:37.973 } 00:18:37.973 }, 00:18:37.973 "base_bdevs_list": [ 00:18:37.973 { 00:18:37.973 "name": "spare", 00:18:37.973 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:37.973 "is_configured": true, 00:18:37.973 "data_offset": 2048, 00:18:37.973 "data_size": 63488 00:18:37.973 }, 00:18:37.973 { 00:18:37.973 "name": "BaseBdev2", 00:18:37.973 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:37.973 "is_configured": true, 00:18:37.973 "data_offset": 2048, 00:18:37.973 "data_size": 63488 00:18:37.973 }, 00:18:37.973 { 00:18:37.973 "name": "BaseBdev3", 00:18:37.973 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:37.973 "is_configured": true, 00:18:37.973 "data_offset": 2048, 00:18:37.973 "data_size": 63488 00:18:37.973 }, 00:18:37.973 { 00:18:37.973 "name": "BaseBdev4", 00:18:37.973 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:37.973 "is_configured": true, 00:18:37.973 "data_offset": 2048, 00:18:37.973 "data_size": 63488 00:18:37.973 } 00:18:37.973 ] 00:18:37.973 }' 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.973 09:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.910 "name": "raid_bdev1", 00:18:38.910 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:38.910 "strip_size_kb": 64, 00:18:38.910 "state": "online", 00:18:38.910 "raid_level": "raid5f", 00:18:38.910 "superblock": true, 00:18:38.910 "num_base_bdevs": 4, 00:18:38.910 "num_base_bdevs_discovered": 4, 00:18:38.910 "num_base_bdevs_operational": 4, 00:18:38.910 "process": { 00:18:38.910 "type": "rebuild", 00:18:38.910 "target": "spare", 00:18:38.910 "progress": { 00:18:38.910 "blocks": 84480, 00:18:38.910 "percent": 44 00:18:38.910 } 00:18:38.910 }, 00:18:38.910 "base_bdevs_list": [ 00:18:38.910 { 00:18:38.910 "name": "spare", 00:18:38.910 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 
00:18:38.910 "is_configured": true, 00:18:38.910 "data_offset": 2048, 00:18:38.910 "data_size": 63488 00:18:38.910 }, 00:18:38.910 { 00:18:38.910 "name": "BaseBdev2", 00:18:38.910 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:38.910 "is_configured": true, 00:18:38.910 "data_offset": 2048, 00:18:38.910 "data_size": 63488 00:18:38.910 }, 00:18:38.910 { 00:18:38.910 "name": "BaseBdev3", 00:18:38.910 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:38.910 "is_configured": true, 00:18:38.910 "data_offset": 2048, 00:18:38.910 "data_size": 63488 00:18:38.910 }, 00:18:38.910 { 00:18:38.910 "name": "BaseBdev4", 00:18:38.910 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:38.910 "is_configured": true, 00:18:38.910 "data_offset": 2048, 00:18:38.910 "data_size": 63488 00:18:38.910 } 00:18:38.910 ] 00:18:38.910 }' 00:18:38.910 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.170 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.170 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.170 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.170 09:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.106 09:52:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.106 "name": "raid_bdev1", 00:18:40.106 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:40.106 "strip_size_kb": 64, 00:18:40.106 "state": "online", 00:18:40.106 "raid_level": "raid5f", 00:18:40.106 "superblock": true, 00:18:40.106 "num_base_bdevs": 4, 00:18:40.106 "num_base_bdevs_discovered": 4, 00:18:40.106 "num_base_bdevs_operational": 4, 00:18:40.106 "process": { 00:18:40.106 "type": "rebuild", 00:18:40.106 "target": "spare", 00:18:40.106 "progress": { 00:18:40.106 "blocks": 107520, 00:18:40.106 "percent": 56 00:18:40.106 } 00:18:40.106 }, 00:18:40.106 "base_bdevs_list": [ 00:18:40.106 { 00:18:40.106 "name": "spare", 00:18:40.106 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:40.106 "is_configured": true, 00:18:40.106 "data_offset": 2048, 00:18:40.106 "data_size": 63488 00:18:40.106 }, 00:18:40.106 { 00:18:40.106 "name": "BaseBdev2", 00:18:40.106 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:40.106 "is_configured": true, 00:18:40.106 "data_offset": 2048, 00:18:40.106 "data_size": 63488 00:18:40.106 }, 00:18:40.106 { 00:18:40.106 "name": "BaseBdev3", 00:18:40.106 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:40.106 "is_configured": true, 00:18:40.106 "data_offset": 2048, 00:18:40.106 
"data_size": 63488 00:18:40.106 }, 00:18:40.106 { 00:18:40.106 "name": "BaseBdev4", 00:18:40.106 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:40.106 "is_configured": true, 00:18:40.106 "data_offset": 2048, 00:18:40.106 "data_size": 63488 00:18:40.106 } 00:18:40.106 ] 00:18:40.106 }' 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.106 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.412 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.412 09:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.349 
09:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.349 "name": "raid_bdev1", 00:18:41.349 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:41.349 "strip_size_kb": 64, 00:18:41.349 "state": "online", 00:18:41.349 "raid_level": "raid5f", 00:18:41.349 "superblock": true, 00:18:41.349 "num_base_bdevs": 4, 00:18:41.349 "num_base_bdevs_discovered": 4, 00:18:41.349 "num_base_bdevs_operational": 4, 00:18:41.349 "process": { 00:18:41.349 "type": "rebuild", 00:18:41.349 "target": "spare", 00:18:41.349 "progress": { 00:18:41.349 "blocks": 128640, 00:18:41.349 "percent": 67 00:18:41.349 } 00:18:41.349 }, 00:18:41.349 "base_bdevs_list": [ 00:18:41.349 { 00:18:41.349 "name": "spare", 00:18:41.349 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:41.349 "is_configured": true, 00:18:41.349 "data_offset": 2048, 00:18:41.349 "data_size": 63488 00:18:41.349 }, 00:18:41.349 { 00:18:41.349 "name": "BaseBdev2", 00:18:41.349 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:41.349 "is_configured": true, 00:18:41.349 "data_offset": 2048, 00:18:41.349 "data_size": 63488 00:18:41.349 }, 00:18:41.349 { 00:18:41.349 "name": "BaseBdev3", 00:18:41.349 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:41.349 "is_configured": true, 00:18:41.349 "data_offset": 2048, 00:18:41.349 "data_size": 63488 00:18:41.349 }, 00:18:41.349 { 00:18:41.349 "name": "BaseBdev4", 00:18:41.349 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:41.349 "is_configured": true, 00:18:41.349 "data_offset": 2048, 00:18:41.349 "data_size": 63488 00:18:41.349 } 00:18:41.349 ] 00:18:41.349 }' 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.349 09:52:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.349 09:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.284 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.545 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.545 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.545 "name": "raid_bdev1", 00:18:42.545 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:42.545 "strip_size_kb": 64, 00:18:42.545 "state": "online", 00:18:42.545 "raid_level": "raid5f", 00:18:42.545 "superblock": true, 00:18:42.545 "num_base_bdevs": 4, 00:18:42.545 "num_base_bdevs_discovered": 4, 00:18:42.545 "num_base_bdevs_operational": 
4, 00:18:42.545 "process": { 00:18:42.545 "type": "rebuild", 00:18:42.545 "target": "spare", 00:18:42.545 "progress": { 00:18:42.545 "blocks": 149760, 00:18:42.545 "percent": 78 00:18:42.545 } 00:18:42.545 }, 00:18:42.545 "base_bdevs_list": [ 00:18:42.545 { 00:18:42.545 "name": "spare", 00:18:42.545 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:42.545 "is_configured": true, 00:18:42.545 "data_offset": 2048, 00:18:42.545 "data_size": 63488 00:18:42.545 }, 00:18:42.545 { 00:18:42.545 "name": "BaseBdev2", 00:18:42.545 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:42.545 "is_configured": true, 00:18:42.545 "data_offset": 2048, 00:18:42.545 "data_size": 63488 00:18:42.545 }, 00:18:42.545 { 00:18:42.545 "name": "BaseBdev3", 00:18:42.545 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:42.545 "is_configured": true, 00:18:42.545 "data_offset": 2048, 00:18:42.545 "data_size": 63488 00:18:42.545 }, 00:18:42.545 { 00:18:42.545 "name": "BaseBdev4", 00:18:42.545 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:42.545 "is_configured": true, 00:18:42.545 "data_offset": 2048, 00:18:42.545 "data_size": 63488 00:18:42.545 } 00:18:42.545 ] 00:18:42.545 }' 00:18:42.545 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.545 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.545 09:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.545 09:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.545 09:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.483 
09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.483 "name": "raid_bdev1", 00:18:43.483 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:43.483 "strip_size_kb": 64, 00:18:43.483 "state": "online", 00:18:43.483 "raid_level": "raid5f", 00:18:43.483 "superblock": true, 00:18:43.483 "num_base_bdevs": 4, 00:18:43.483 "num_base_bdevs_discovered": 4, 00:18:43.483 "num_base_bdevs_operational": 4, 00:18:43.483 "process": { 00:18:43.483 "type": "rebuild", 00:18:43.483 "target": "spare", 00:18:43.483 "progress": { 00:18:43.483 "blocks": 172800, 00:18:43.483 "percent": 90 00:18:43.483 } 00:18:43.483 }, 00:18:43.483 "base_bdevs_list": [ 00:18:43.483 { 00:18:43.483 "name": "spare", 00:18:43.483 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:43.483 "is_configured": true, 00:18:43.483 "data_offset": 2048, 00:18:43.483 "data_size": 63488 00:18:43.483 }, 00:18:43.483 { 00:18:43.483 "name": "BaseBdev2", 00:18:43.483 "uuid": 
"50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:43.483 "is_configured": true, 00:18:43.483 "data_offset": 2048, 00:18:43.483 "data_size": 63488 00:18:43.483 }, 00:18:43.483 { 00:18:43.483 "name": "BaseBdev3", 00:18:43.483 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:43.483 "is_configured": true, 00:18:43.483 "data_offset": 2048, 00:18:43.483 "data_size": 63488 00:18:43.483 }, 00:18:43.483 { 00:18:43.483 "name": "BaseBdev4", 00:18:43.483 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:43.483 "is_configured": true, 00:18:43.483 "data_offset": 2048, 00:18:43.483 "data_size": 63488 00:18:43.483 } 00:18:43.483 ] 00:18:43.483 }' 00:18:43.483 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.741 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.741 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.741 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.741 09:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.680 [2024-10-11 09:52:29.041278] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:44.680 [2024-10-11 09:52:29.041368] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.680 [2024-10-11 09:52:29.041509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.680 "name": "raid_bdev1", 00:18:44.680 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:44.680 "strip_size_kb": 64, 00:18:44.680 "state": "online", 00:18:44.680 "raid_level": "raid5f", 00:18:44.680 "superblock": true, 00:18:44.680 "num_base_bdevs": 4, 00:18:44.680 "num_base_bdevs_discovered": 4, 00:18:44.680 "num_base_bdevs_operational": 4, 00:18:44.680 "base_bdevs_list": [ 00:18:44.680 { 00:18:44.680 "name": "spare", 00:18:44.680 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:44.680 "is_configured": true, 00:18:44.680 "data_offset": 2048, 00:18:44.680 "data_size": 63488 00:18:44.680 }, 00:18:44.680 { 00:18:44.680 "name": "BaseBdev2", 00:18:44.680 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:44.680 "is_configured": true, 00:18:44.680 "data_offset": 2048, 00:18:44.680 "data_size": 63488 00:18:44.680 }, 00:18:44.680 { 00:18:44.680 "name": "BaseBdev3", 00:18:44.680 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:44.680 "is_configured": true, 00:18:44.680 "data_offset": 2048, 00:18:44.680 "data_size": 63488 00:18:44.680 }, 
00:18:44.680 { 00:18:44.680 "name": "BaseBdev4", 00:18:44.680 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:44.680 "is_configured": true, 00:18:44.680 "data_offset": 2048, 00:18:44.680 "data_size": 63488 00:18:44.680 } 00:18:44.680 ] 00:18:44.680 }' 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:44.680 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.940 "name": "raid_bdev1", 00:18:44.940 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:44.940 "strip_size_kb": 64, 00:18:44.940 "state": "online", 00:18:44.940 "raid_level": "raid5f", 00:18:44.940 "superblock": true, 00:18:44.940 "num_base_bdevs": 4, 00:18:44.940 "num_base_bdevs_discovered": 4, 00:18:44.940 "num_base_bdevs_operational": 4, 00:18:44.940 "base_bdevs_list": [ 00:18:44.940 { 00:18:44.940 "name": "spare", 00:18:44.940 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 }, 00:18:44.940 { 00:18:44.940 "name": "BaseBdev2", 00:18:44.940 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 }, 00:18:44.940 { 00:18:44.940 "name": "BaseBdev3", 00:18:44.940 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 }, 00:18:44.940 { 00:18:44.940 "name": "BaseBdev4", 00:18:44.940 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 } 00:18:44.940 ] 00:18:44.940 }' 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:44.940 09:52:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.940 "name": "raid_bdev1", 00:18:44.940 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:44.940 "strip_size_kb": 64, 00:18:44.940 "state": "online", 00:18:44.940 "raid_level": "raid5f", 00:18:44.940 "superblock": true, 00:18:44.940 "num_base_bdevs": 4, 00:18:44.940 "num_base_bdevs_discovered": 4, 00:18:44.940 "num_base_bdevs_operational": 4, 00:18:44.940 
"base_bdevs_list": [ 00:18:44.940 { 00:18:44.940 "name": "spare", 00:18:44.940 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 }, 00:18:44.940 { 00:18:44.940 "name": "BaseBdev2", 00:18:44.940 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 }, 00:18:44.940 { 00:18:44.940 "name": "BaseBdev3", 00:18:44.940 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 }, 00:18:44.940 { 00:18:44.940 "name": "BaseBdev4", 00:18:44.940 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:44.940 "is_configured": true, 00:18:44.940 "data_offset": 2048, 00:18:44.940 "data_size": 63488 00:18:44.940 } 00:18:44.940 ] 00:18:44.940 }' 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.940 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.508 [2024-10-11 09:52:29.971889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.508 [2024-10-11 09:52:29.971933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.508 [2024-10-11 09:52:29.972028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.508 [2024-10-11 09:52:29.972157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:18:45.508 [2024-10-11 09:52:29.972174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.508 09:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.508 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:45.767 /dev/nbd0 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.767 1+0 records in 00:18:45.767 1+0 records out 00:18:45.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002902 s, 14.1 MB/s 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:45.767 09:52:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.767 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:46.024 /dev/nbd1 00:18:46.024 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:46.024 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:46.024 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:46.024 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:46.024 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:46.024 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:18:46.025 1+0 records in 00:18:46.025 1+0 records out 00:18:46.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290333 s, 14.1 MB/s 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.025 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:46.283 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:46.283 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.283 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:46.283 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.283 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:46.283 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.283 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.540 09:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.799 [2024-10-11 09:52:31.244317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.799 [2024-10-11 09:52:31.244391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.799 [2024-10-11 09:52:31.244431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:46.799 [2024-10-11 09:52:31.244444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.799 [2024-10-11 09:52:31.247086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.799 [2024-10-11 09:52:31.247130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.799 [2024-10-11 09:52:31.247235] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.799 [2024-10-11 09:52:31.247305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.799 [2024-10-11 09:52:31.247460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.799 [2024-10-11 09:52:31.247565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.799 [2024-10-11 09:52:31.247659] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:46.799 spare 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.799 [2024-10-11 09:52:31.347631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:46.799 [2024-10-11 09:52:31.347707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:46.799 [2024-10-11 09:52:31.348126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:46.799 [2024-10-11 09:52:31.357317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:46.799 [2024-10-11 09:52:31.357349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:46.799 [2024-10-11 09:52:31.357632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.799 "name": "raid_bdev1", 00:18:46.799 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:46.799 "strip_size_kb": 64, 00:18:46.799 "state": "online", 00:18:46.799 "raid_level": "raid5f", 00:18:46.799 "superblock": true, 00:18:46.799 "num_base_bdevs": 4, 00:18:46.799 "num_base_bdevs_discovered": 4, 00:18:46.799 "num_base_bdevs_operational": 4, 00:18:46.799 "base_bdevs_list": [ 00:18:46.799 { 00:18:46.799 "name": "spare", 00:18:46.799 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:46.799 "is_configured": true, 00:18:46.799 "data_offset": 2048, 00:18:46.799 "data_size": 63488 00:18:46.799 }, 00:18:46.799 { 00:18:46.799 "name": "BaseBdev2", 00:18:46.799 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:46.799 "is_configured": true, 00:18:46.799 "data_offset": 
2048, 00:18:46.799 "data_size": 63488 00:18:46.799 }, 00:18:46.799 { 00:18:46.799 "name": "BaseBdev3", 00:18:46.799 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:46.799 "is_configured": true, 00:18:46.799 "data_offset": 2048, 00:18:46.799 "data_size": 63488 00:18:46.799 }, 00:18:46.799 { 00:18:46.799 "name": "BaseBdev4", 00:18:46.799 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:46.799 "is_configured": true, 00:18:46.799 "data_offset": 2048, 00:18:46.799 "data_size": 63488 00:18:46.799 } 00:18:46.799 ] 00:18:46.799 }' 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.799 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.365 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.365 "name": 
"raid_bdev1", 00:18:47.365 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:47.365 "strip_size_kb": 64, 00:18:47.365 "state": "online", 00:18:47.365 "raid_level": "raid5f", 00:18:47.365 "superblock": true, 00:18:47.365 "num_base_bdevs": 4, 00:18:47.365 "num_base_bdevs_discovered": 4, 00:18:47.365 "num_base_bdevs_operational": 4, 00:18:47.365 "base_bdevs_list": [ 00:18:47.365 { 00:18:47.365 "name": "spare", 00:18:47.365 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:47.365 "is_configured": true, 00:18:47.365 "data_offset": 2048, 00:18:47.365 "data_size": 63488 00:18:47.365 }, 00:18:47.365 { 00:18:47.366 "name": "BaseBdev2", 00:18:47.366 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:47.366 "is_configured": true, 00:18:47.366 "data_offset": 2048, 00:18:47.366 "data_size": 63488 00:18:47.366 }, 00:18:47.366 { 00:18:47.366 "name": "BaseBdev3", 00:18:47.366 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:47.366 "is_configured": true, 00:18:47.366 "data_offset": 2048, 00:18:47.366 "data_size": 63488 00:18:47.366 }, 00:18:47.366 { 00:18:47.366 "name": "BaseBdev4", 00:18:47.366 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:47.366 "is_configured": true, 00:18:47.366 "data_offset": 2048, 00:18:47.366 "data_size": 63488 00:18:47.366 } 00:18:47.366 ] 00:18:47.366 }' 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.366 
09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.366 [2024-10-11 09:52:31.933966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.366 "name": "raid_bdev1", 00:18:47.366 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:47.366 "strip_size_kb": 64, 00:18:47.366 "state": "online", 00:18:47.366 "raid_level": "raid5f", 00:18:47.366 "superblock": true, 00:18:47.366 "num_base_bdevs": 4, 00:18:47.366 "num_base_bdevs_discovered": 3, 00:18:47.366 "num_base_bdevs_operational": 3, 00:18:47.366 "base_bdevs_list": [ 00:18:47.366 { 00:18:47.366 "name": null, 00:18:47.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.366 "is_configured": false, 00:18:47.366 "data_offset": 0, 00:18:47.366 "data_size": 63488 00:18:47.366 }, 00:18:47.366 { 00:18:47.366 "name": "BaseBdev2", 00:18:47.366 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:47.366 "is_configured": true, 00:18:47.366 "data_offset": 2048, 00:18:47.366 "data_size": 63488 00:18:47.366 }, 00:18:47.366 { 00:18:47.366 "name": "BaseBdev3", 00:18:47.366 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:47.366 "is_configured": true, 00:18:47.366 "data_offset": 2048, 00:18:47.366 "data_size": 63488 00:18:47.366 }, 00:18:47.366 { 00:18:47.366 "name": "BaseBdev4", 00:18:47.366 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:47.366 "is_configured": true, 00:18:47.366 "data_offset": 
2048, 00:18:47.366 "data_size": 63488 00:18:47.366 } 00:18:47.366 ] 00:18:47.366 }' 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.366 09:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.932 09:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.932 09:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.932 09:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.932 [2024-10-11 09:52:32.369309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.932 [2024-10-11 09:52:32.369556] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.932 [2024-10-11 09:52:32.369585] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:47.932 [2024-10-11 09:52:32.369624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.932 [2024-10-11 09:52:32.389503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:47.932 09:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.932 09:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:47.932 [2024-10-11 09:52:32.401703] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.866 "name": "raid_bdev1", 00:18:48.866 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:48.866 "strip_size_kb": 64, 00:18:48.866 "state": "online", 00:18:48.866 
"raid_level": "raid5f", 00:18:48.866 "superblock": true, 00:18:48.866 "num_base_bdevs": 4, 00:18:48.866 "num_base_bdevs_discovered": 4, 00:18:48.866 "num_base_bdevs_operational": 4, 00:18:48.866 "process": { 00:18:48.866 "type": "rebuild", 00:18:48.866 "target": "spare", 00:18:48.866 "progress": { 00:18:48.866 "blocks": 17280, 00:18:48.866 "percent": 9 00:18:48.866 } 00:18:48.866 }, 00:18:48.866 "base_bdevs_list": [ 00:18:48.866 { 00:18:48.866 "name": "spare", 00:18:48.866 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:48.866 "is_configured": true, 00:18:48.866 "data_offset": 2048, 00:18:48.866 "data_size": 63488 00:18:48.866 }, 00:18:48.866 { 00:18:48.866 "name": "BaseBdev2", 00:18:48.866 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:48.866 "is_configured": true, 00:18:48.866 "data_offset": 2048, 00:18:48.866 "data_size": 63488 00:18:48.866 }, 00:18:48.866 { 00:18:48.866 "name": "BaseBdev3", 00:18:48.866 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:48.866 "is_configured": true, 00:18:48.866 "data_offset": 2048, 00:18:48.866 "data_size": 63488 00:18:48.866 }, 00:18:48.866 { 00:18:48.866 "name": "BaseBdev4", 00:18:48.866 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:48.866 "is_configured": true, 00:18:48.866 "data_offset": 2048, 00:18:48.866 "data_size": 63488 00:18:48.866 } 00:18:48.866 ] 00:18:48.866 }' 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.866 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.125 [2024-10-11 09:52:33.533084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.125 [2024-10-11 09:52:33.611070] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.125 [2024-10-11 09:52:33.611154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.125 [2024-10-11 09:52:33.611174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.125 [2024-10-11 09:52:33.611189] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.125 "name": "raid_bdev1", 00:18:49.125 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:49.125 "strip_size_kb": 64, 00:18:49.125 "state": "online", 00:18:49.125 "raid_level": "raid5f", 00:18:49.125 "superblock": true, 00:18:49.125 "num_base_bdevs": 4, 00:18:49.125 "num_base_bdevs_discovered": 3, 00:18:49.125 "num_base_bdevs_operational": 3, 00:18:49.125 "base_bdevs_list": [ 00:18:49.125 { 00:18:49.125 "name": null, 00:18:49.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.125 "is_configured": false, 00:18:49.125 "data_offset": 0, 00:18:49.125 "data_size": 63488 00:18:49.125 }, 00:18:49.125 { 00:18:49.125 "name": "BaseBdev2", 00:18:49.125 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:49.125 "is_configured": true, 00:18:49.125 "data_offset": 2048, 00:18:49.125 "data_size": 63488 00:18:49.125 }, 00:18:49.125 { 00:18:49.125 "name": "BaseBdev3", 00:18:49.125 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:49.125 "is_configured": true, 00:18:49.125 "data_offset": 2048, 00:18:49.125 "data_size": 63488 00:18:49.125 }, 00:18:49.125 { 00:18:49.125 "name": "BaseBdev4", 00:18:49.125 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:49.125 "is_configured": true, 00:18:49.125 "data_offset": 2048, 00:18:49.125 "data_size": 63488 00:18:49.125 } 00:18:49.125 ] 00:18:49.125 }' 
00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.125 09:52:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.693 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.693 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.693 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.693 [2024-10-11 09:52:34.057299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.693 [2024-10-11 09:52:34.057388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.693 [2024-10-11 09:52:34.057422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:49.693 [2024-10-11 09:52:34.057434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.693 [2024-10-11 09:52:34.057995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.693 [2024-10-11 09:52:34.058031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.693 [2024-10-11 09:52:34.058143] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.693 [2024-10-11 09:52:34.058169] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:49.693 [2024-10-11 09:52:34.058185] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:49.693 [2024-10-11 09:52:34.058211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.693 [2024-10-11 09:52:34.076449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:49.693 spare 00:18:49.693 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.693 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:49.693 [2024-10-11 09:52:34.087504] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.629 "name": "raid_bdev1", 00:18:50.629 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:50.629 "strip_size_kb": 64, 00:18:50.629 "state": 
"online", 00:18:50.629 "raid_level": "raid5f", 00:18:50.629 "superblock": true, 00:18:50.629 "num_base_bdevs": 4, 00:18:50.629 "num_base_bdevs_discovered": 4, 00:18:50.629 "num_base_bdevs_operational": 4, 00:18:50.629 "process": { 00:18:50.629 "type": "rebuild", 00:18:50.629 "target": "spare", 00:18:50.629 "progress": { 00:18:50.629 "blocks": 19200, 00:18:50.629 "percent": 10 00:18:50.629 } 00:18:50.629 }, 00:18:50.629 "base_bdevs_list": [ 00:18:50.629 { 00:18:50.629 "name": "spare", 00:18:50.629 "uuid": "a911d585-d6aa-561d-a8e7-4bca4c78812c", 00:18:50.629 "is_configured": true, 00:18:50.629 "data_offset": 2048, 00:18:50.629 "data_size": 63488 00:18:50.629 }, 00:18:50.629 { 00:18:50.629 "name": "BaseBdev2", 00:18:50.629 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:50.629 "is_configured": true, 00:18:50.629 "data_offset": 2048, 00:18:50.629 "data_size": 63488 00:18:50.629 }, 00:18:50.629 { 00:18:50.629 "name": "BaseBdev3", 00:18:50.629 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:50.629 "is_configured": true, 00:18:50.629 "data_offset": 2048, 00:18:50.629 "data_size": 63488 00:18:50.629 }, 00:18:50.629 { 00:18:50.629 "name": "BaseBdev4", 00:18:50.629 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:50.629 "is_configured": true, 00:18:50.629 "data_offset": 2048, 00:18:50.629 "data_size": 63488 00:18:50.629 } 00:18:50.629 ] 00:18:50.629 }' 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.629 09:52:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.629 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.629 [2024-10-11 09:52:35.211098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.888 [2024-10-11 09:52:35.296893] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.888 [2024-10-11 09:52:35.297011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.888 [2024-10-11 09:52:35.297037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.888 [2024-10-11 09:52:35.297048] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.888 09:52:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.888 "name": "raid_bdev1", 00:18:50.888 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:50.888 "strip_size_kb": 64, 00:18:50.888 "state": "online", 00:18:50.888 "raid_level": "raid5f", 00:18:50.888 "superblock": true, 00:18:50.888 "num_base_bdevs": 4, 00:18:50.888 "num_base_bdevs_discovered": 3, 00:18:50.888 "num_base_bdevs_operational": 3, 00:18:50.888 "base_bdevs_list": [ 00:18:50.888 { 00:18:50.888 "name": null, 00:18:50.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.888 "is_configured": false, 00:18:50.888 "data_offset": 0, 00:18:50.888 "data_size": 63488 00:18:50.888 }, 00:18:50.888 { 00:18:50.888 "name": "BaseBdev2", 00:18:50.888 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:50.888 "is_configured": true, 00:18:50.888 "data_offset": 2048, 00:18:50.888 "data_size": 63488 00:18:50.888 }, 00:18:50.888 { 00:18:50.888 "name": "BaseBdev3", 00:18:50.888 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:50.888 "is_configured": true, 00:18:50.888 "data_offset": 2048, 00:18:50.888 "data_size": 63488 00:18:50.888 }, 00:18:50.888 { 00:18:50.888 "name": "BaseBdev4", 00:18:50.888 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:50.888 "is_configured": true, 00:18:50.888 "data_offset": 2048, 00:18:50.888 
"data_size": 63488 00:18:50.888 } 00:18:50.888 ] 00:18:50.888 }' 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.888 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.460 "name": "raid_bdev1", 00:18:51.460 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:51.460 "strip_size_kb": 64, 00:18:51.460 "state": "online", 00:18:51.460 "raid_level": "raid5f", 00:18:51.460 "superblock": true, 00:18:51.460 "num_base_bdevs": 4, 00:18:51.460 "num_base_bdevs_discovered": 3, 00:18:51.460 "num_base_bdevs_operational": 3, 00:18:51.460 "base_bdevs_list": [ 00:18:51.460 { 00:18:51.460 "name": null, 00:18:51.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.460 
"is_configured": false, 00:18:51.460 "data_offset": 0, 00:18:51.460 "data_size": 63488 00:18:51.460 }, 00:18:51.460 { 00:18:51.460 "name": "BaseBdev2", 00:18:51.460 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:51.460 "is_configured": true, 00:18:51.460 "data_offset": 2048, 00:18:51.460 "data_size": 63488 00:18:51.460 }, 00:18:51.460 { 00:18:51.460 "name": "BaseBdev3", 00:18:51.460 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:51.460 "is_configured": true, 00:18:51.460 "data_offset": 2048, 00:18:51.460 "data_size": 63488 00:18:51.460 }, 00:18:51.460 { 00:18:51.460 "name": "BaseBdev4", 00:18:51.460 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:51.460 "is_configured": true, 00:18:51.460 "data_offset": 2048, 00:18:51.460 "data_size": 63488 00:18:51.460 } 00:18:51.460 ] 00:18:51.460 }' 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.460 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.460 09:52:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.460 [2024-10-11 09:52:35.900304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.460 [2024-10-11 09:52:35.900373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.460 [2024-10-11 09:52:35.900400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:51.460 [2024-10-11 09:52:35.900413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.460 [2024-10-11 09:52:35.900966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.460 [2024-10-11 09:52:35.900995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.461 [2024-10-11 09:52:35.901085] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:51.461 [2024-10-11 09:52:35.901108] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.461 [2024-10-11 09:52:35.901121] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:51.461 [2024-10-11 09:52:35.901133] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:51.461 BaseBdev1 00:18:51.461 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.461 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.413 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.414 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.414 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.414 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.414 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.414 "name": "raid_bdev1", 00:18:52.414 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:52.414 "strip_size_kb": 64, 00:18:52.414 "state": "online", 00:18:52.414 "raid_level": "raid5f", 00:18:52.414 "superblock": true, 00:18:52.414 "num_base_bdevs": 4, 00:18:52.414 "num_base_bdevs_discovered": 3, 00:18:52.414 "num_base_bdevs_operational": 3, 00:18:52.414 "base_bdevs_list": [ 00:18:52.414 { 00:18:52.414 "name": null, 00:18:52.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.414 "is_configured": false, 00:18:52.414 
"data_offset": 0, 00:18:52.414 "data_size": 63488 00:18:52.414 }, 00:18:52.414 { 00:18:52.414 "name": "BaseBdev2", 00:18:52.414 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:52.414 "is_configured": true, 00:18:52.414 "data_offset": 2048, 00:18:52.414 "data_size": 63488 00:18:52.414 }, 00:18:52.414 { 00:18:52.414 "name": "BaseBdev3", 00:18:52.414 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:52.414 "is_configured": true, 00:18:52.414 "data_offset": 2048, 00:18:52.414 "data_size": 63488 00:18:52.414 }, 00:18:52.414 { 00:18:52.414 "name": "BaseBdev4", 00:18:52.414 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:52.414 "is_configured": true, 00:18:52.414 "data_offset": 2048, 00:18:52.414 "data_size": 63488 00:18:52.414 } 00:18:52.414 ] 00:18:52.414 }' 00:18:52.414 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.414 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:52.673 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.931 "name": "raid_bdev1", 00:18:52.931 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:52.931 "strip_size_kb": 64, 00:18:52.931 "state": "online", 00:18:52.931 "raid_level": "raid5f", 00:18:52.931 "superblock": true, 00:18:52.931 "num_base_bdevs": 4, 00:18:52.931 "num_base_bdevs_discovered": 3, 00:18:52.931 "num_base_bdevs_operational": 3, 00:18:52.931 "base_bdevs_list": [ 00:18:52.931 { 00:18:52.931 "name": null, 00:18:52.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.931 "is_configured": false, 00:18:52.931 "data_offset": 0, 00:18:52.931 "data_size": 63488 00:18:52.931 }, 00:18:52.931 { 00:18:52.931 "name": "BaseBdev2", 00:18:52.931 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:52.931 "is_configured": true, 00:18:52.931 "data_offset": 2048, 00:18:52.931 "data_size": 63488 00:18:52.931 }, 00:18:52.931 { 00:18:52.931 "name": "BaseBdev3", 00:18:52.931 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:52.931 "is_configured": true, 00:18:52.931 "data_offset": 2048, 00:18:52.931 "data_size": 63488 00:18:52.931 }, 00:18:52.931 { 00:18:52.931 "name": "BaseBdev4", 00:18:52.931 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:52.931 "is_configured": true, 00:18:52.931 "data_offset": 2048, 00:18:52.931 "data_size": 63488 00:18:52.931 } 00:18:52.931 ] 00:18:52.931 }' 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.931 
09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:52.931 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.932 [2024-10-11 09:52:37.382188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.932 [2024-10-11 09:52:37.382393] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:52.932 [2024-10-11 09:52:37.382419] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:52.932 request: 00:18:52.932 { 00:18:52.932 "base_bdev": "BaseBdev1", 00:18:52.932 "raid_bdev": "raid_bdev1", 00:18:52.932 "method": "bdev_raid_add_base_bdev", 00:18:52.932 "req_id": 1 00:18:52.932 } 00:18:52.932 Got JSON-RPC error response 00:18:52.932 response: 00:18:52.932 { 00:18:52.932 "code": -22, 00:18:52.932 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:52.932 } 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.932 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.867 "name": "raid_bdev1", 00:18:53.867 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:53.867 "strip_size_kb": 64, 00:18:53.867 "state": "online", 00:18:53.867 "raid_level": "raid5f", 00:18:53.867 "superblock": true, 00:18:53.867 "num_base_bdevs": 4, 00:18:53.867 "num_base_bdevs_discovered": 3, 00:18:53.867 "num_base_bdevs_operational": 3, 00:18:53.867 "base_bdevs_list": [ 00:18:53.867 { 00:18:53.867 "name": null, 00:18:53.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.867 "is_configured": false, 00:18:53.867 "data_offset": 0, 00:18:53.867 "data_size": 63488 00:18:53.867 }, 00:18:53.867 { 00:18:53.867 "name": "BaseBdev2", 00:18:53.867 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:53.867 "is_configured": true, 00:18:53.867 "data_offset": 2048, 00:18:53.867 "data_size": 63488 00:18:53.867 }, 00:18:53.867 { 00:18:53.867 "name": "BaseBdev3", 00:18:53.867 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:53.867 "is_configured": true, 00:18:53.867 "data_offset": 2048, 00:18:53.867 "data_size": 63488 00:18:53.867 }, 00:18:53.867 { 00:18:53.867 "name": "BaseBdev4", 00:18:53.867 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:53.867 "is_configured": true, 00:18:53.867 "data_offset": 2048, 00:18:53.867 "data_size": 63488 00:18:53.867 } 00:18:53.867 ] 00:18:53.867 }' 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.867 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.435 "name": "raid_bdev1", 00:18:54.435 "uuid": "1be9e952-2573-41a6-8d70-a99ed5d911b3", 00:18:54.435 "strip_size_kb": 64, 00:18:54.435 "state": "online", 00:18:54.435 "raid_level": "raid5f", 00:18:54.435 "superblock": true, 00:18:54.435 "num_base_bdevs": 4, 00:18:54.435 "num_base_bdevs_discovered": 3, 00:18:54.435 "num_base_bdevs_operational": 3, 00:18:54.435 "base_bdevs_list": [ 00:18:54.435 { 00:18:54.435 "name": null, 00:18:54.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.435 "is_configured": false, 00:18:54.435 "data_offset": 0, 00:18:54.435 "data_size": 63488 00:18:54.435 }, 00:18:54.435 { 00:18:54.435 "name": "BaseBdev2", 00:18:54.435 "uuid": "50e4dcc1-5633-5dd9-ba4d-838d74bd2d56", 00:18:54.435 "is_configured": true, 
00:18:54.435 "data_offset": 2048, 00:18:54.435 "data_size": 63488 00:18:54.435 }, 00:18:54.435 { 00:18:54.435 "name": "BaseBdev3", 00:18:54.435 "uuid": "cc4de198-3d15-51d0-b7dc-0ba6f4592b33", 00:18:54.435 "is_configured": true, 00:18:54.435 "data_offset": 2048, 00:18:54.435 "data_size": 63488 00:18:54.435 }, 00:18:54.435 { 00:18:54.435 "name": "BaseBdev4", 00:18:54.435 "uuid": "0ce2039b-f82f-533a-89ec-60c116c98001", 00:18:54.435 "is_configured": true, 00:18:54.435 "data_offset": 2048, 00:18:54.435 "data_size": 63488 00:18:54.435 } 00:18:54.435 ] 00:18:54.435 }' 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.435 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.435 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.435 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85698 00:18:54.435 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85698 ']' 00:18:54.435 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85698 00:18:54.435 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:54.435 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.435 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85698 00:18:54.694 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.694 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.694 killing process with pid 85698 00:18:54.694 09:52:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85698' 00:18:54.694 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85698 00:18:54.694 Received shutdown signal, test time was about 60.000000 seconds 00:18:54.694 00:18:54.694 Latency(us) 00:18:54.694 [2024-10-11T09:52:39.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.694 [2024-10-11T09:52:39.326Z] =================================================================================================================== 00:18:54.694 [2024-10-11T09:52:39.326Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.694 [2024-10-11 09:52:39.078815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.694 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85698 00:18:54.694 [2024-10-11 09:52:39.078957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.694 [2024-10-11 09:52:39.079054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.694 [2024-10-11 09:52:39.079069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:55.261 [2024-10-11 09:52:39.604026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.198 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:56.198 00:18:56.198 real 0m27.111s 00:18:56.198 user 0m33.768s 00:18:56.198 sys 0m3.124s 00:18:56.198 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:56.198 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.198 ************************************ 00:18:56.198 END TEST raid5f_rebuild_test_sb 00:18:56.198 ************************************ 00:18:56.198 09:52:40 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:56.198 09:52:40 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:56.198 09:52:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:56.198 09:52:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.198 09:52:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.458 ************************************ 00:18:56.458 START TEST raid_state_function_test_sb_4k 00:18:56.458 ************************************ 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.458 09:52:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86504 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:56.458 Process raid pid: 86504 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86504' 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86504 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86504 ']' 00:18:56.458 09:52:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.458 09:52:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.458 [2024-10-11 09:52:40.919431] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:18:56.459 [2024-10-11 09:52:40.919562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.459 [2024-10-11 09:52:41.087418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.718 [2024-10-11 09:52:41.223064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.978 [2024-10-11 09:52:41.474314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.978 [2024-10-11 09:52:41.474384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.240 [2024-10-11 09:52:41.805893] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.240 [2024-10-11 09:52:41.805946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.240 [2024-10-11 09:52:41.805957] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.240 [2024-10-11 09:52:41.805967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.240 
09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.240 "name": "Existed_Raid", 00:18:57.240 "uuid": "3f64acc0-a2b9-4d84-911b-70c2420d74b6", 00:18:57.240 "strip_size_kb": 0, 00:18:57.240 "state": "configuring", 00:18:57.240 "raid_level": "raid1", 00:18:57.240 "superblock": true, 00:18:57.240 "num_base_bdevs": 2, 00:18:57.240 "num_base_bdevs_discovered": 0, 00:18:57.240 "num_base_bdevs_operational": 2, 00:18:57.240 "base_bdevs_list": [ 00:18:57.240 { 00:18:57.240 "name": "BaseBdev1", 00:18:57.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.240 "is_configured": false, 00:18:57.240 "data_offset": 0, 00:18:57.240 "data_size": 0 00:18:57.240 }, 00:18:57.240 { 00:18:57.240 "name": "BaseBdev2", 00:18:57.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.240 "is_configured": false, 00:18:57.240 "data_offset": 0, 00:18:57.240 "data_size": 0 00:18:57.240 } 00:18:57.240 ] 00:18:57.240 }' 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.240 09:52:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.807 [2024-10-11 09:52:42.225116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.807 [2024-10-11 09:52:42.225154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.807 [2024-10-11 09:52:42.233130] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.807 [2024-10-11 09:52:42.233187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.807 [2024-10-11 09:52:42.233196] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.807 [2024-10-11 09:52:42.233208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.807 09:52:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.807 [2024-10-11 09:52:42.282331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.807 BaseBdev1 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.807 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.807 [ 00:18:57.807 { 00:18:57.807 "name": "BaseBdev1", 00:18:57.807 "aliases": [ 00:18:57.807 
"00b28056-5ba8-4040-a86f-f96216a83fba" 00:18:57.807 ], 00:18:57.807 "product_name": "Malloc disk", 00:18:57.807 "block_size": 4096, 00:18:57.807 "num_blocks": 8192, 00:18:57.807 "uuid": "00b28056-5ba8-4040-a86f-f96216a83fba", 00:18:57.807 "assigned_rate_limits": { 00:18:57.807 "rw_ios_per_sec": 0, 00:18:57.807 "rw_mbytes_per_sec": 0, 00:18:57.807 "r_mbytes_per_sec": 0, 00:18:57.807 "w_mbytes_per_sec": 0 00:18:57.807 }, 00:18:57.807 "claimed": true, 00:18:57.807 "claim_type": "exclusive_write", 00:18:57.807 "zoned": false, 00:18:57.807 "supported_io_types": { 00:18:57.807 "read": true, 00:18:57.807 "write": true, 00:18:57.807 "unmap": true, 00:18:57.807 "flush": true, 00:18:57.807 "reset": true, 00:18:57.807 "nvme_admin": false, 00:18:57.807 "nvme_io": false, 00:18:57.807 "nvme_io_md": false, 00:18:57.807 "write_zeroes": true, 00:18:57.807 "zcopy": true, 00:18:57.807 "get_zone_info": false, 00:18:57.808 "zone_management": false, 00:18:57.808 "zone_append": false, 00:18:57.808 "compare": false, 00:18:57.808 "compare_and_write": false, 00:18:57.808 "abort": true, 00:18:57.808 "seek_hole": false, 00:18:57.808 "seek_data": false, 00:18:57.808 "copy": true, 00:18:57.808 "nvme_iov_md": false 00:18:57.808 }, 00:18:57.808 "memory_domains": [ 00:18:57.808 { 00:18:57.808 "dma_device_id": "system", 00:18:57.808 "dma_device_type": 1 00:18:57.808 }, 00:18:57.808 { 00:18:57.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.808 "dma_device_type": 2 00:18:57.808 } 00:18:57.808 ], 00:18:57.808 "driver_specific": {} 00:18:57.808 } 00:18:57.808 ] 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.808 "name": "Existed_Raid", 00:18:57.808 "uuid": "aee89a5c-bf30-4d88-a1fb-f1b5dbdfa7e5", 00:18:57.808 "strip_size_kb": 0, 00:18:57.808 "state": "configuring", 00:18:57.808 "raid_level": "raid1", 00:18:57.808 "superblock": true, 00:18:57.808 "num_base_bdevs": 2, 00:18:57.808 
"num_base_bdevs_discovered": 1, 00:18:57.808 "num_base_bdevs_operational": 2, 00:18:57.808 "base_bdevs_list": [ 00:18:57.808 { 00:18:57.808 "name": "BaseBdev1", 00:18:57.808 "uuid": "00b28056-5ba8-4040-a86f-f96216a83fba", 00:18:57.808 "is_configured": true, 00:18:57.808 "data_offset": 256, 00:18:57.808 "data_size": 7936 00:18:57.808 }, 00:18:57.808 { 00:18:57.808 "name": "BaseBdev2", 00:18:57.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.808 "is_configured": false, 00:18:57.808 "data_offset": 0, 00:18:57.808 "data_size": 0 00:18:57.808 } 00:18:57.808 ] 00:18:57.808 }' 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.808 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.377 [2024-10-11 09:52:42.777567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.377 [2024-10-11 09:52:42.777630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.377 [2024-10-11 09:52:42.789600] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.377 [2024-10-11 09:52:42.791455] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.377 [2024-10-11 09:52:42.791501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.377 "name": "Existed_Raid", 00:18:58.377 "uuid": "22bd28f9-81da-4592-b83a-3324c4da3ad0", 00:18:58.377 "strip_size_kb": 0, 00:18:58.377 "state": "configuring", 00:18:58.377 "raid_level": "raid1", 00:18:58.377 "superblock": true, 00:18:58.377 "num_base_bdevs": 2, 00:18:58.377 "num_base_bdevs_discovered": 1, 00:18:58.377 "num_base_bdevs_operational": 2, 00:18:58.377 "base_bdevs_list": [ 00:18:58.377 { 00:18:58.377 "name": "BaseBdev1", 00:18:58.377 "uuid": "00b28056-5ba8-4040-a86f-f96216a83fba", 00:18:58.377 "is_configured": true, 00:18:58.377 "data_offset": 256, 00:18:58.377 "data_size": 7936 00:18:58.377 }, 00:18:58.377 { 00:18:58.377 "name": "BaseBdev2", 00:18:58.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.377 "is_configured": false, 00:18:58.377 "data_offset": 0, 00:18:58.377 "data_size": 0 00:18:58.377 } 00:18:58.377 ] 00:18:58.377 }' 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.377 09:52:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.637 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:58.637 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.637 09:52:43 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.897 [2024-10-11 09:52:43.278030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.897 [2024-10-11 09:52:43.278322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:58.897 [2024-10-11 09:52:43.278338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:58.897 [2024-10-11 09:52:43.278613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:58.897 [2024-10-11 09:52:43.278887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:58.897 [2024-10-11 09:52:43.278912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:58.897 BaseBdev2 00:18:58.897 [2024-10-11 09:52:43.279074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:58.897 09:52:43 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.897 [ 00:18:58.897 { 00:18:58.897 "name": "BaseBdev2", 00:18:58.897 "aliases": [ 00:18:58.897 "3ba846a3-0aa8-4f38-b332-9d497eda8b0d" 00:18:58.897 ], 00:18:58.897 "product_name": "Malloc disk", 00:18:58.897 "block_size": 4096, 00:18:58.897 "num_blocks": 8192, 00:18:58.897 "uuid": "3ba846a3-0aa8-4f38-b332-9d497eda8b0d", 00:18:58.897 "assigned_rate_limits": { 00:18:58.897 "rw_ios_per_sec": 0, 00:18:58.897 "rw_mbytes_per_sec": 0, 00:18:58.897 "r_mbytes_per_sec": 0, 00:18:58.897 "w_mbytes_per_sec": 0 00:18:58.897 }, 00:18:58.897 "claimed": true, 00:18:58.897 "claim_type": "exclusive_write", 00:18:58.897 "zoned": false, 00:18:58.897 "supported_io_types": { 00:18:58.897 "read": true, 00:18:58.897 "write": true, 00:18:58.897 "unmap": true, 00:18:58.897 "flush": true, 00:18:58.897 "reset": true, 00:18:58.897 "nvme_admin": false, 00:18:58.897 "nvme_io": false, 00:18:58.897 "nvme_io_md": false, 00:18:58.897 "write_zeroes": true, 00:18:58.897 "zcopy": true, 00:18:58.897 "get_zone_info": false, 00:18:58.897 "zone_management": false, 00:18:58.897 "zone_append": false, 00:18:58.897 "compare": false, 00:18:58.897 "compare_and_write": false, 00:18:58.897 "abort": true, 00:18:58.897 "seek_hole": false, 00:18:58.897 "seek_data": false, 00:18:58.897 "copy": true, 00:18:58.897 "nvme_iov_md": false 
00:18:58.897 }, 00:18:58.897 "memory_domains": [ 00:18:58.897 { 00:18:58.897 "dma_device_id": "system", 00:18:58.897 "dma_device_type": 1 00:18:58.897 }, 00:18:58.897 { 00:18:58.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.897 "dma_device_type": 2 00:18:58.897 } 00:18:58.897 ], 00:18:58.897 "driver_specific": {} 00:18:58.897 } 00:18:58.897 ] 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.897 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.898 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.898 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.898 "name": "Existed_Raid", 00:18:58.898 "uuid": "22bd28f9-81da-4592-b83a-3324c4da3ad0", 00:18:58.898 "strip_size_kb": 0, 00:18:58.898 "state": "online", 00:18:58.898 "raid_level": "raid1", 00:18:58.898 "superblock": true, 00:18:58.898 "num_base_bdevs": 2, 00:18:58.898 "num_base_bdevs_discovered": 2, 00:18:58.898 "num_base_bdevs_operational": 2, 00:18:58.898 "base_bdevs_list": [ 00:18:58.898 { 00:18:58.898 "name": "BaseBdev1", 00:18:58.898 "uuid": "00b28056-5ba8-4040-a86f-f96216a83fba", 00:18:58.898 "is_configured": true, 00:18:58.898 "data_offset": 256, 00:18:58.898 "data_size": 7936 00:18:58.898 }, 00:18:58.898 { 00:18:58.898 "name": "BaseBdev2", 00:18:58.898 "uuid": "3ba846a3-0aa8-4f38-b332-9d497eda8b0d", 00:18:58.898 "is_configured": true, 00:18:58.898 "data_offset": 256, 00:18:58.898 "data_size": 7936 00:18:58.898 } 00:18:58.898 ] 00:18:58.898 }' 00:18:58.898 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.898 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.466 09:52:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.466 [2024-10-11 09:52:43.797448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.466 "name": "Existed_Raid", 00:18:59.466 "aliases": [ 00:18:59.466 "22bd28f9-81da-4592-b83a-3324c4da3ad0" 00:18:59.466 ], 00:18:59.466 "product_name": "Raid Volume", 00:18:59.466 "block_size": 4096, 00:18:59.466 "num_blocks": 7936, 00:18:59.466 "uuid": "22bd28f9-81da-4592-b83a-3324c4da3ad0", 00:18:59.466 "assigned_rate_limits": { 00:18:59.466 "rw_ios_per_sec": 0, 00:18:59.466 "rw_mbytes_per_sec": 0, 00:18:59.466 "r_mbytes_per_sec": 0, 00:18:59.466 "w_mbytes_per_sec": 0 00:18:59.466 }, 00:18:59.466 "claimed": false, 00:18:59.466 "zoned": false, 00:18:59.466 "supported_io_types": { 00:18:59.466 "read": true, 
00:18:59.466 "write": true, 00:18:59.466 "unmap": false, 00:18:59.466 "flush": false, 00:18:59.466 "reset": true, 00:18:59.466 "nvme_admin": false, 00:18:59.466 "nvme_io": false, 00:18:59.466 "nvme_io_md": false, 00:18:59.466 "write_zeroes": true, 00:18:59.466 "zcopy": false, 00:18:59.466 "get_zone_info": false, 00:18:59.466 "zone_management": false, 00:18:59.466 "zone_append": false, 00:18:59.466 "compare": false, 00:18:59.466 "compare_and_write": false, 00:18:59.466 "abort": false, 00:18:59.466 "seek_hole": false, 00:18:59.466 "seek_data": false, 00:18:59.466 "copy": false, 00:18:59.466 "nvme_iov_md": false 00:18:59.466 }, 00:18:59.466 "memory_domains": [ 00:18:59.466 { 00:18:59.466 "dma_device_id": "system", 00:18:59.466 "dma_device_type": 1 00:18:59.466 }, 00:18:59.466 { 00:18:59.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.466 "dma_device_type": 2 00:18:59.466 }, 00:18:59.466 { 00:18:59.466 "dma_device_id": "system", 00:18:59.466 "dma_device_type": 1 00:18:59.466 }, 00:18:59.466 { 00:18:59.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.466 "dma_device_type": 2 00:18:59.466 } 00:18:59.466 ], 00:18:59.466 "driver_specific": { 00:18:59.466 "raid": { 00:18:59.466 "uuid": "22bd28f9-81da-4592-b83a-3324c4da3ad0", 00:18:59.466 "strip_size_kb": 0, 00:18:59.466 "state": "online", 00:18:59.466 "raid_level": "raid1", 00:18:59.466 "superblock": true, 00:18:59.466 "num_base_bdevs": 2, 00:18:59.466 "num_base_bdevs_discovered": 2, 00:18:59.466 "num_base_bdevs_operational": 2, 00:18:59.466 "base_bdevs_list": [ 00:18:59.466 { 00:18:59.466 "name": "BaseBdev1", 00:18:59.466 "uuid": "00b28056-5ba8-4040-a86f-f96216a83fba", 00:18:59.466 "is_configured": true, 00:18:59.466 "data_offset": 256, 00:18:59.466 "data_size": 7936 00:18:59.466 }, 00:18:59.466 { 00:18:59.466 "name": "BaseBdev2", 00:18:59.466 "uuid": "3ba846a3-0aa8-4f38-b332-9d497eda8b0d", 00:18:59.466 "is_configured": true, 00:18:59.466 "data_offset": 256, 00:18:59.466 "data_size": 7936 00:18:59.466 } 
00:18:59.466 ] 00:18:59.466 } 00:18:59.466 } 00:18:59.466 }' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:59.466 BaseBdev2' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.466 09:52:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.466 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:59.466 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:59.466 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:59.466 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.466 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.466 [2024-10-11 09:52:44.036904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:59.725 09:52:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.725 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.726 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.726 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.726 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.726 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.726 "name": "Existed_Raid", 00:18:59.726 "uuid": "22bd28f9-81da-4592-b83a-3324c4da3ad0", 00:18:59.726 "strip_size_kb": 0, 00:18:59.726 "state": "online", 00:18:59.726 "raid_level": "raid1", 00:18:59.726 "superblock": true, 00:18:59.726 
"num_base_bdevs": 2, 00:18:59.726 "num_base_bdevs_discovered": 1, 00:18:59.726 "num_base_bdevs_operational": 1, 00:18:59.726 "base_bdevs_list": [ 00:18:59.726 { 00:18:59.726 "name": null, 00:18:59.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.726 "is_configured": false, 00:18:59.726 "data_offset": 0, 00:18:59.726 "data_size": 7936 00:18:59.726 }, 00:18:59.726 { 00:18:59.726 "name": "BaseBdev2", 00:18:59.726 "uuid": "3ba846a3-0aa8-4f38-b332-9d497eda8b0d", 00:18:59.726 "is_configured": true, 00:18:59.726 "data_offset": 256, 00:18:59.726 "data_size": 7936 00:18:59.726 } 00:18:59.726 ] 00:18:59.726 }' 00:18:59.726 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.726 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.985 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.985 [2024-10-11 09:52:44.565357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:59.985 [2024-10-11 09:52:44.565461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.244 [2024-10-11 09:52:44.661981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.244 [2024-10-11 09:52:44.662040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.244 [2024-10-11 09:52:44.662052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:00.245 09:52:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86504 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86504 ']' 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86504 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86504 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:00.245 killing process with pid 86504 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86504' 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86504 00:19:00.245 [2024-10-11 09:52:44.760899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.245 09:52:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86504 00:19:00.245 [2024-10-11 09:52:44.778494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.269 09:52:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:01.269 00:19:01.269 real 0m5.069s 00:19:01.269 user 0m7.268s 00:19:01.269 sys 0m0.888s 00:19:01.269 09:52:45 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.269 09:52:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.526 ************************************ 00:19:01.526 END TEST raid_state_function_test_sb_4k 00:19:01.526 ************************************ 00:19:01.526 09:52:45 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:01.526 09:52:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:01.526 09:52:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.526 09:52:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.526 ************************************ 00:19:01.526 START TEST raid_superblock_test_4k 00:19:01.526 ************************************ 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:01.526 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:01.527 
09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86756 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86756 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86756 ']' 00:19:01.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.527 09:52:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.527 [2024-10-11 09:52:46.068078] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:01.527 [2024-10-11 09:52:46.068214] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86756 ] 00:19:01.786 [2024-10-11 09:52:46.233258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.786 [2024-10-11 09:52:46.362087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.045 [2024-10-11 09:52:46.590124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.045 [2024-10-11 09:52:46.590183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.305 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.567 malloc1 00:19:02.567 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.567 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.567 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.567 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.567 [2024-10-11 09:52:46.989548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.567 [2024-10-11 09:52:46.989687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.567 [2024-10-11 09:52:46.989758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:02.567 [2024-10-11 09:52:46.989800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.567 [2024-10-11 09:52:46.992248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.568 [2024-10-11 09:52:46.992331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.568 pt1 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.568 09:52:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.568 malloc2 00:19:02.568 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.569 [2024-10-11 09:52:47.054093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.569 [2024-10-11 09:52:47.054158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.569 [2024-10-11 09:52:47.054184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:02.569 [2024-10-11 09:52:47.054193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.569 [2024-10-11 09:52:47.056308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.569 [2024-10-11 
09:52:47.056419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.569 pt2 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:02.569 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.570 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.570 [2024-10-11 09:52:47.066147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.570 [2024-10-11 09:52:47.068295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.570 [2024-10-11 09:52:47.068509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:02.570 [2024-10-11 09:52:47.068527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.570 [2024-10-11 09:52:47.068846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:02.570 [2024-10-11 09:52:47.069044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:02.570 [2024-10-11 09:52:47.069081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:02.570 [2024-10-11 09:52:47.069268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.570 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.570 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.570 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.571 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.572 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.572 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.572 "name": "raid_bdev1", 00:19:02.572 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:02.572 "strip_size_kb": 0, 00:19:02.572 "state": "online", 00:19:02.572 "raid_level": "raid1", 00:19:02.572 "superblock": true, 00:19:02.572 "num_base_bdevs": 2, 00:19:02.572 
"num_base_bdevs_discovered": 2, 00:19:02.572 "num_base_bdevs_operational": 2, 00:19:02.572 "base_bdevs_list": [ 00:19:02.572 { 00:19:02.572 "name": "pt1", 00:19:02.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.572 "is_configured": true, 00:19:02.572 "data_offset": 256, 00:19:02.572 "data_size": 7936 00:19:02.572 }, 00:19:02.572 { 00:19:02.572 "name": "pt2", 00:19:02.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.572 "is_configured": true, 00:19:02.572 "data_offset": 256, 00:19:02.572 "data_size": 7936 00:19:02.572 } 00:19:02.572 ] 00:19:02.573 }' 00:19:02.573 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.573 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.146 [2024-10-11 09:52:47.577562] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.146 "name": "raid_bdev1", 00:19:03.146 "aliases": [ 00:19:03.146 "887d031e-02ff-42f7-99ab-9565b29b2422" 00:19:03.146 ], 00:19:03.146 "product_name": "Raid Volume", 00:19:03.146 "block_size": 4096, 00:19:03.146 "num_blocks": 7936, 00:19:03.146 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:03.146 "assigned_rate_limits": { 00:19:03.146 "rw_ios_per_sec": 0, 00:19:03.146 "rw_mbytes_per_sec": 0, 00:19:03.146 "r_mbytes_per_sec": 0, 00:19:03.146 "w_mbytes_per_sec": 0 00:19:03.146 }, 00:19:03.146 "claimed": false, 00:19:03.146 "zoned": false, 00:19:03.146 "supported_io_types": { 00:19:03.146 "read": true, 00:19:03.146 "write": true, 00:19:03.146 "unmap": false, 00:19:03.146 "flush": false, 00:19:03.146 "reset": true, 00:19:03.146 "nvme_admin": false, 00:19:03.146 "nvme_io": false, 00:19:03.146 "nvme_io_md": false, 00:19:03.146 "write_zeroes": true, 00:19:03.146 "zcopy": false, 00:19:03.146 "get_zone_info": false, 00:19:03.146 "zone_management": false, 00:19:03.146 "zone_append": false, 00:19:03.146 "compare": false, 00:19:03.146 "compare_and_write": false, 00:19:03.146 "abort": false, 00:19:03.146 "seek_hole": false, 00:19:03.146 "seek_data": false, 00:19:03.146 "copy": false, 00:19:03.146 "nvme_iov_md": false 00:19:03.146 }, 00:19:03.146 "memory_domains": [ 00:19:03.146 { 00:19:03.146 "dma_device_id": "system", 00:19:03.146 "dma_device_type": 1 00:19:03.146 }, 00:19:03.146 { 00:19:03.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.146 "dma_device_type": 2 00:19:03.146 }, 00:19:03.146 { 00:19:03.146 "dma_device_id": "system", 00:19:03.146 "dma_device_type": 1 00:19:03.146 }, 00:19:03.146 { 00:19:03.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.146 "dma_device_type": 2 00:19:03.146 } 00:19:03.146 ], 
00:19:03.146 "driver_specific": { 00:19:03.146 "raid": { 00:19:03.146 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:03.146 "strip_size_kb": 0, 00:19:03.146 "state": "online", 00:19:03.146 "raid_level": "raid1", 00:19:03.146 "superblock": true, 00:19:03.146 "num_base_bdevs": 2, 00:19:03.146 "num_base_bdevs_discovered": 2, 00:19:03.146 "num_base_bdevs_operational": 2, 00:19:03.146 "base_bdevs_list": [ 00:19:03.146 { 00:19:03.146 "name": "pt1", 00:19:03.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.146 "is_configured": true, 00:19:03.146 "data_offset": 256, 00:19:03.146 "data_size": 7936 00:19:03.146 }, 00:19:03.146 { 00:19:03.146 "name": "pt2", 00:19:03.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.146 "is_configured": true, 00:19:03.146 "data_offset": 256, 00:19:03.146 "data_size": 7936 00:19:03.146 } 00:19:03.146 ] 00:19:03.146 } 00:19:03.146 } 00:19:03.146 }' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:03.146 pt2' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.146 09:52:47 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.146 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 [2024-10-11 09:52:47.825130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=887d031e-02ff-42f7-99ab-9565b29b2422 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 887d031e-02ff-42f7-99ab-9565b29b2422 ']' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 [2024-10-11 09:52:47.868769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.406 [2024-10-11 09:52:47.868800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.406 [2024-10-11 09:52:47.868899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.406 [2024-10-11 09:52:47.868968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.406 [2024-10-11 09:52:47.868981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:03.406 09:52:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 [2024-10-11 09:52:48.012518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:03.406 [2024-10-11 09:52:48.014524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:03.406 [2024-10-11 09:52:48.014638] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:03.406 [2024-10-11 09:52:48.014741] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:03.406 [2024-10-11 09:52:48.014821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.406 [2024-10-11 09:52:48.014891] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:03.406 request: 00:19:03.406 { 00:19:03.406 "name": "raid_bdev1", 00:19:03.406 "raid_level": "raid1", 00:19:03.406 "base_bdevs": [ 00:19:03.406 "malloc1", 00:19:03.406 "malloc2" 00:19:03.406 ], 00:19:03.406 "superblock": false, 00:19:03.406 "method": "bdev_raid_create", 00:19:03.406 "req_id": 1 00:19:03.406 } 00:19:03.406 Got JSON-RPC error response 00:19:03.406 response: 00:19:03.406 { 00:19:03.406 "code": -17, 00:19:03.406 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:03.406 } 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.666 [2024-10-11 09:52:48.084409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:03.666 [2024-10-11 09:52:48.084551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.666 [2024-10-11 09:52:48.084577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:03.666 [2024-10-11 09:52:48.084590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.666 [2024-10-11 09:52:48.087155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.666 [2024-10-11 09:52:48.087196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:03.666 [2024-10-11 09:52:48.087303] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:03.666 [2024-10-11 09:52:48.087385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:03.666 pt1 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.666 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.667 "name": "raid_bdev1", 00:19:03.667 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:03.667 "strip_size_kb": 0, 00:19:03.667 "state": "configuring", 00:19:03.667 "raid_level": "raid1", 00:19:03.667 "superblock": true, 00:19:03.667 "num_base_bdevs": 2, 00:19:03.667 "num_base_bdevs_discovered": 1, 00:19:03.667 "num_base_bdevs_operational": 2, 00:19:03.667 "base_bdevs_list": [ 00:19:03.667 { 00:19:03.667 "name": "pt1", 00:19:03.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.667 "is_configured": true, 00:19:03.667 "data_offset": 256, 00:19:03.667 "data_size": 7936 00:19:03.667 }, 00:19:03.667 { 00:19:03.667 "name": null, 00:19:03.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.667 "is_configured": false, 00:19:03.667 "data_offset": 256, 00:19:03.667 "data_size": 7936 00:19:03.667 } 
00:19:03.667 ] 00:19:03.667 }' 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.667 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.926 [2024-10-11 09:52:48.491764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.926 [2024-10-11 09:52:48.491904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.926 [2024-10-11 09:52:48.491932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:03.926 [2024-10-11 09:52:48.491944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.926 [2024-10-11 09:52:48.492433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.926 [2024-10-11 09:52:48.492454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.926 [2024-10-11 09:52:48.492540] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:03.926 [2024-10-11 09:52:48.492565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.926 [2024-10-11 09:52:48.492685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:03.926 [2024-10-11 09:52:48.492695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:03.926 [2024-10-11 09:52:48.492932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:03.926 [2024-10-11 09:52:48.493099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:03.926 [2024-10-11 09:52:48.493108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:03.926 [2024-10-11 09:52:48.493254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.926 pt2 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.926 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.927 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.927 "name": "raid_bdev1", 00:19:03.927 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:03.927 "strip_size_kb": 0, 00:19:03.927 "state": "online", 00:19:03.927 "raid_level": "raid1", 00:19:03.927 "superblock": true, 00:19:03.927 "num_base_bdevs": 2, 00:19:03.927 "num_base_bdevs_discovered": 2, 00:19:03.927 "num_base_bdevs_operational": 2, 00:19:03.927 "base_bdevs_list": [ 00:19:03.927 { 00:19:03.927 "name": "pt1", 00:19:03.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.927 "is_configured": true, 00:19:03.927 "data_offset": 256, 00:19:03.927 "data_size": 7936 00:19:03.927 }, 00:19:03.927 { 00:19:03.927 "name": "pt2", 00:19:03.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.927 "is_configured": true, 00:19:03.927 "data_offset": 256, 00:19:03.927 "data_size": 7936 00:19:03.927 } 00:19:03.927 ] 00:19:03.927 }' 00:19:03.927 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.927 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 [2024-10-11 09:52:48.935234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.504 09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.504 "name": "raid_bdev1", 00:19:04.504 "aliases": [ 00:19:04.504 "887d031e-02ff-42f7-99ab-9565b29b2422" 00:19:04.504 ], 00:19:04.504 "product_name": "Raid Volume", 00:19:04.504 "block_size": 4096, 00:19:04.504 "num_blocks": 7936, 00:19:04.504 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:04.504 "assigned_rate_limits": { 00:19:04.504 "rw_ios_per_sec": 0, 00:19:04.504 "rw_mbytes_per_sec": 0, 00:19:04.504 "r_mbytes_per_sec": 0, 00:19:04.504 "w_mbytes_per_sec": 0 00:19:04.504 }, 00:19:04.504 "claimed": false, 00:19:04.504 "zoned": false, 00:19:04.504 "supported_io_types": { 00:19:04.504 "read": true, 00:19:04.504 "write": true, 00:19:04.504 "unmap": false, 
00:19:04.504 "flush": false, 00:19:04.504 "reset": true, 00:19:04.504 "nvme_admin": false, 00:19:04.504 "nvme_io": false, 00:19:04.504 "nvme_io_md": false, 00:19:04.504 "write_zeroes": true, 00:19:04.504 "zcopy": false, 00:19:04.504 "get_zone_info": false, 00:19:04.504 "zone_management": false, 00:19:04.504 "zone_append": false, 00:19:04.504 "compare": false, 00:19:04.504 "compare_and_write": false, 00:19:04.504 "abort": false, 00:19:04.504 "seek_hole": false, 00:19:04.504 "seek_data": false, 00:19:04.504 "copy": false, 00:19:04.504 "nvme_iov_md": false 00:19:04.504 }, 00:19:04.504 "memory_domains": [ 00:19:04.504 { 00:19:04.504 "dma_device_id": "system", 00:19:04.504 "dma_device_type": 1 00:19:04.504 }, 00:19:04.504 { 00:19:04.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.504 "dma_device_type": 2 00:19:04.504 }, 00:19:04.504 { 00:19:04.504 "dma_device_id": "system", 00:19:04.504 "dma_device_type": 1 00:19:04.504 }, 00:19:04.504 { 00:19:04.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.504 "dma_device_type": 2 00:19:04.504 } 00:19:04.504 ], 00:19:04.504 "driver_specific": { 00:19:04.504 "raid": { 00:19:04.504 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:04.504 "strip_size_kb": 0, 00:19:04.504 "state": "online", 00:19:04.504 "raid_level": "raid1", 00:19:04.504 "superblock": true, 00:19:04.504 "num_base_bdevs": 2, 00:19:04.504 "num_base_bdevs_discovered": 2, 00:19:04.504 "num_base_bdevs_operational": 2, 00:19:04.504 "base_bdevs_list": [ 00:19:04.504 { 00:19:04.504 "name": "pt1", 00:19:04.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.504 "is_configured": true, 00:19:04.504 "data_offset": 256, 00:19:04.504 "data_size": 7936 00:19:04.504 }, 00:19:04.504 { 00:19:04.504 "name": "pt2", 00:19:04.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.504 "is_configured": true, 00:19:04.504 "data_offset": 256, 00:19:04.504 "data_size": 7936 00:19:04.504 } 00:19:04.504 ] 00:19:04.504 } 00:19:04.504 } 00:19:04.504 }' 00:19:04.504 
09:52:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:04.504 pt2' 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 09:52:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.769 [2024-10-11 09:52:49.186782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 887d031e-02ff-42f7-99ab-9565b29b2422 '!=' 887d031e-02ff-42f7-99ab-9565b29b2422 ']' 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.769 [2024-10-11 09:52:49.230526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.769 "name": "raid_bdev1", 00:19:04.769 "uuid": 
"887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:04.769 "strip_size_kb": 0, 00:19:04.769 "state": "online", 00:19:04.769 "raid_level": "raid1", 00:19:04.769 "superblock": true, 00:19:04.769 "num_base_bdevs": 2, 00:19:04.769 "num_base_bdevs_discovered": 1, 00:19:04.769 "num_base_bdevs_operational": 1, 00:19:04.769 "base_bdevs_list": [ 00:19:04.769 { 00:19:04.769 "name": null, 00:19:04.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.769 "is_configured": false, 00:19:04.769 "data_offset": 0, 00:19:04.769 "data_size": 7936 00:19:04.769 }, 00:19:04.769 { 00:19:04.769 "name": "pt2", 00:19:04.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.769 "is_configured": true, 00:19:04.769 "data_offset": 256, 00:19:04.769 "data_size": 7936 00:19:04.769 } 00:19:04.769 ] 00:19:04.769 }' 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.769 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 [2024-10-11 09:52:49.681793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.338 [2024-10-11 09:52:49.681874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.338 [2024-10-11 09:52:49.682008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.338 [2024-10-11 09:52:49.682091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.338 [2024-10-11 09:52:49.682143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 [2024-10-11 09:52:49.757650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.338 [2024-10-11 09:52:49.757799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.338 [2024-10-11 09:52:49.757847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:05.338 [2024-10-11 09:52:49.757889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.338 [2024-10-11 09:52:49.760421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.338 [2024-10-11 09:52:49.760508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.338 [2024-10-11 09:52:49.760632] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.338 [2024-10-11 09:52:49.760694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.338 [2024-10-11 09:52:49.760830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:05.338 [2024-10-11 09:52:49.760845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:05.338 [2024-10-11 09:52:49.761095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:05.338 [2024-10-11 09:52:49.761265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:05.338 [2024-10-11 09:52:49.761275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:19:05.338 [2024-10-11 09:52:49.761429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.338 pt2 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.338 09:52:49 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.338 "name": "raid_bdev1", 00:19:05.338 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:05.338 "strip_size_kb": 0, 00:19:05.338 "state": "online", 00:19:05.338 "raid_level": "raid1", 00:19:05.338 "superblock": true, 00:19:05.338 "num_base_bdevs": 2, 00:19:05.338 "num_base_bdevs_discovered": 1, 00:19:05.338 "num_base_bdevs_operational": 1, 00:19:05.338 "base_bdevs_list": [ 00:19:05.338 { 00:19:05.338 "name": null, 00:19:05.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.338 "is_configured": false, 00:19:05.338 "data_offset": 256, 00:19:05.338 "data_size": 7936 00:19:05.338 }, 00:19:05.338 { 00:19:05.338 "name": "pt2", 00:19:05.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.338 "is_configured": true, 00:19:05.338 "data_offset": 256, 00:19:05.338 "data_size": 7936 00:19:05.338 } 00:19:05.338 ] 00:19:05.338 }' 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.338 09:52:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.597 [2024-10-11 09:52:50.168993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.597 [2024-10-11 09:52:50.169029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.597 [2024-10-11 09:52:50.169118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.597 [2024-10-11 09:52:50.169184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:05.597 [2024-10-11 09:52:50.169194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.597 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.597 [2024-10-11 09:52:50.220907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.597 [2024-10-11 09:52:50.220977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.597 [2024-10-11 09:52:50.221015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:05.597 [2024-10-11 09:52:50.221025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.597 [2024-10-11 09:52:50.223264] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.597 [2024-10-11 09:52:50.223303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.597 [2024-10-11 09:52:50.223400] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:05.597 [2024-10-11 09:52:50.223447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.597 [2024-10-11 09:52:50.223584] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:05.597 [2024-10-11 09:52:50.223594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.597 [2024-10-11 09:52:50.223609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:05.597 [2024-10-11 09:52:50.223710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.597 [2024-10-11 09:52:50.223830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:05.597 [2024-10-11 09:52:50.223840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:05.597 [2024-10-11 09:52:50.224093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:05.597 [2024-10-11 09:52:50.224335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:05.597 [2024-10-11 09:52:50.224354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:05.597 [2024-10-11 09:52:50.224522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.597 pt1 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.856 "name": "raid_bdev1", 00:19:05.856 "uuid": "887d031e-02ff-42f7-99ab-9565b29b2422", 00:19:05.856 "strip_size_kb": 0, 00:19:05.856 "state": "online", 00:19:05.856 
"raid_level": "raid1", 00:19:05.856 "superblock": true, 00:19:05.856 "num_base_bdevs": 2, 00:19:05.856 "num_base_bdevs_discovered": 1, 00:19:05.856 "num_base_bdevs_operational": 1, 00:19:05.856 "base_bdevs_list": [ 00:19:05.856 { 00:19:05.856 "name": null, 00:19:05.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.856 "is_configured": false, 00:19:05.856 "data_offset": 256, 00:19:05.856 "data_size": 7936 00:19:05.856 }, 00:19:05.856 { 00:19:05.856 "name": "pt2", 00:19:05.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.856 "is_configured": true, 00:19:05.856 "data_offset": 256, 00:19:05.856 "data_size": 7936 00:19:05.856 } 00:19:05.856 ] 00:19:05.856 }' 00:19:05.856 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.857 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:06.120 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:19:06.379 [2024-10-11 09:52:50.756188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 887d031e-02ff-42f7-99ab-9565b29b2422 '!=' 887d031e-02ff-42f7-99ab-9565b29b2422 ']' 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86756 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86756 ']' 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86756 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86756 00:19:06.379 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:06.380 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:06.380 killing process with pid 86756 00:19:06.380 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86756' 00:19:06.380 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86756 00:19:06.380 [2024-10-11 09:52:50.822682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.380 [2024-10-11 09:52:50.822801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.380 [2024-10-11 09:52:50.822849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.380 [2024-10-11 
09:52:50.822866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:06.380 09:52:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86756 00:19:06.640 [2024-10-11 09:52:51.018461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.582 09:52:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:07.582 ************************************ 00:19:07.582 END TEST raid_superblock_test_4k 00:19:07.582 ************************************ 00:19:07.582 00:19:07.582 real 0m6.116s 00:19:07.582 user 0m9.269s 00:19:07.582 sys 0m1.166s 00:19:07.582 09:52:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:07.582 09:52:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.582 09:52:52 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:07.582 09:52:52 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:07.582 09:52:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:07.582 09:52:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:07.582 09:52:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.582 ************************************ 00:19:07.582 START TEST raid_rebuild_test_sb_4k 00:19:07.582 ************************************ 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:07.582 09:52:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87079 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87079 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 87079 ']' 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.582 09:52:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.841 [2024-10-11 09:52:52.263572] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:07.841 [2024-10-11 09:52:52.263796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87079 ] 00:19:07.841 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:07.841 Zero copy mechanism will not be used. 00:19:07.841 [2024-10-11 09:52:52.427946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.100 [2024-10-11 09:52:52.548546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.360 [2024-10-11 09:52:52.764680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.360 [2024-10-11 09:52:52.764793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.620 BaseBdev1_malloc 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.620 [2024-10-11 09:52:53.152816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:08.620 [2024-10-11 09:52:53.152942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.620 [2024-10-11 09:52:53.152969] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:19:08.620 [2024-10-11 09:52:53.152980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.620 [2024-10-11 09:52:53.155003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.620 [2024-10-11 09:52:53.155042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:08.620 BaseBdev1 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.620 BaseBdev2_malloc 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.620 [2024-10-11 09:52:53.210378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:08.620 [2024-10-11 09:52:53.210433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.620 [2024-10-11 09:52:53.210467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:08.620 [2024-10-11 09:52:53.210478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:08.620 [2024-10-11 09:52:53.212459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.620 [2024-10-11 09:52:53.212497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:08.620 BaseBdev2 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.620 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.880 spare_malloc 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.880 spare_delay 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.880 [2024-10-11 09:52:53.288109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:08.880 [2024-10-11 09:52:53.288180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.880 [2024-10-11 09:52:53.288200] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:08.880 [2024-10-11 09:52:53.288211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.880 [2024-10-11 09:52:53.290197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.880 [2024-10-11 09:52:53.290236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:08.880 spare 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.880 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.880 [2024-10-11 09:52:53.300129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.880 [2024-10-11 09:52:53.301912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.880 [2024-10-11 09:52:53.302095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:08.881 [2024-10-11 09:52:53.302110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:08.881 [2024-10-11 09:52:53.302344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:08.881 [2024-10-11 09:52:53.302500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:08.881 [2024-10-11 09:52:53.302508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:08.881 [2024-10-11 09:52:53.302634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.881 
09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.881 "name": "raid_bdev1", 00:19:08.881 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 
00:19:08.881 "strip_size_kb": 0, 00:19:08.881 "state": "online", 00:19:08.881 "raid_level": "raid1", 00:19:08.881 "superblock": true, 00:19:08.881 "num_base_bdevs": 2, 00:19:08.881 "num_base_bdevs_discovered": 2, 00:19:08.881 "num_base_bdevs_operational": 2, 00:19:08.881 "base_bdevs_list": [ 00:19:08.881 { 00:19:08.881 "name": "BaseBdev1", 00:19:08.881 "uuid": "4a56a64d-ec25-510d-ae50-f6d0054e0e44", 00:19:08.881 "is_configured": true, 00:19:08.881 "data_offset": 256, 00:19:08.881 "data_size": 7936 00:19:08.881 }, 00:19:08.881 { 00:19:08.881 "name": "BaseBdev2", 00:19:08.881 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:08.881 "is_configured": true, 00:19:08.881 "data_offset": 256, 00:19:08.881 "data_size": 7936 00:19:08.881 } 00:19:08.881 ] 00:19:08.881 }' 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.881 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:09.450 [2024-10-11 09:52:53.815567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.450 09:52:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.450 09:52:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:09.710 [2024-10-11 09:52:54.090910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:19:09.710 /dev/nbd0 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:09.710 1+0 records in 00:19:09.710 1+0 records out 00:19:09.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516986 s, 7.9 MB/s 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:09.710 09:52:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:09.710 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:10.279 7936+0 records in 00:19:10.279 7936+0 records out 00:19:10.279 32505856 bytes (33 MB, 31 MiB) copied, 0.568486 s, 57.2 MB/s 00:19:10.279 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:10.279 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:10.279 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:10.279 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.279 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:10.279 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.279 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:10.539 [2024-10-11 09:52:54.919301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.539 [2024-10-11 09:52:54.952267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.539 09:52:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.539 09:52:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.539 "name": "raid_bdev1", 00:19:10.539 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:10.539 "strip_size_kb": 0, 00:19:10.539 "state": "online", 00:19:10.539 "raid_level": "raid1", 00:19:10.539 "superblock": true, 00:19:10.539 "num_base_bdevs": 2, 00:19:10.539 "num_base_bdevs_discovered": 1, 00:19:10.539 "num_base_bdevs_operational": 1, 00:19:10.539 "base_bdevs_list": [ 00:19:10.539 { 00:19:10.539 "name": null, 00:19:10.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.539 "is_configured": false, 00:19:10.539 "data_offset": 0, 00:19:10.539 "data_size": 7936 00:19:10.539 }, 00:19:10.539 { 00:19:10.539 "name": "BaseBdev2", 00:19:10.539 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:10.539 "is_configured": true, 00:19:10.539 "data_offset": 256, 00:19:10.539 "data_size": 7936 00:19:10.539 } 00:19:10.539 ] 00:19:10.539 }' 00:19:10.539 09:52:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.539 09:52:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.799 09:52:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.799 09:52:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.799 09:52:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.799 [2024-10-11 09:52:55.395536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.799 [2024-10-11 09:52:55.413870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:10.799 09:52:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.799 09:52:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:10.799 [2024-10-11 09:52:55.415713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.175 09:52:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.175 "name": "raid_bdev1", 00:19:12.175 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:12.175 "strip_size_kb": 0, 00:19:12.175 "state": "online", 00:19:12.175 "raid_level": "raid1", 00:19:12.175 "superblock": true, 00:19:12.175 "num_base_bdevs": 2, 00:19:12.175 "num_base_bdevs_discovered": 2, 00:19:12.175 "num_base_bdevs_operational": 2, 00:19:12.175 "process": { 00:19:12.175 "type": "rebuild", 00:19:12.175 "target": "spare", 00:19:12.175 "progress": { 00:19:12.175 "blocks": 2560, 00:19:12.175 "percent": 32 00:19:12.175 } 00:19:12.175 }, 00:19:12.175 "base_bdevs_list": [ 00:19:12.175 { 00:19:12.175 "name": "spare", 00:19:12.175 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:12.175 "is_configured": true, 00:19:12.175 "data_offset": 256, 00:19:12.175 "data_size": 7936 00:19:12.175 }, 00:19:12.175 { 00:19:12.175 "name": "BaseBdev2", 00:19:12.175 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:12.175 "is_configured": true, 00:19:12.175 "data_offset": 256, 00:19:12.175 "data_size": 7936 00:19:12.175 } 00:19:12.175 ] 00:19:12.175 }' 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.175 09:52:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.175 [2024-10-11 09:52:56.559326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.175 [2024-10-11 09:52:56.620977] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:12.175 [2024-10-11 09:52:56.621109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.175 [2024-10-11 09:52:56.621126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.175 [2024-10-11 09:52:56.621136] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.175 09:52:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.175 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.175 "name": "raid_bdev1", 00:19:12.175 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:12.175 "strip_size_kb": 0, 00:19:12.175 "state": "online", 00:19:12.175 "raid_level": "raid1", 00:19:12.175 "superblock": true, 00:19:12.175 "num_base_bdevs": 2, 00:19:12.175 "num_base_bdevs_discovered": 1, 00:19:12.175 "num_base_bdevs_operational": 1, 00:19:12.175 "base_bdevs_list": [ 00:19:12.175 { 00:19:12.175 "name": null, 00:19:12.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.175 "is_configured": false, 00:19:12.175 "data_offset": 0, 00:19:12.175 "data_size": 7936 00:19:12.175 }, 00:19:12.175 { 00:19:12.176 "name": "BaseBdev2", 00:19:12.176 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:12.176 "is_configured": true, 00:19:12.176 "data_offset": 256, 00:19:12.176 "data_size": 7936 00:19:12.176 } 00:19:12.176 ] 00:19:12.176 }' 00:19:12.176 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.176 09:52:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.743 09:52:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.743 "name": "raid_bdev1", 00:19:12.743 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:12.743 "strip_size_kb": 0, 00:19:12.743 "state": "online", 00:19:12.743 "raid_level": "raid1", 00:19:12.743 "superblock": true, 00:19:12.743 "num_base_bdevs": 2, 00:19:12.743 "num_base_bdevs_discovered": 1, 00:19:12.743 "num_base_bdevs_operational": 1, 00:19:12.743 "base_bdevs_list": [ 00:19:12.743 { 00:19:12.743 "name": null, 00:19:12.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.743 "is_configured": false, 00:19:12.743 "data_offset": 0, 00:19:12.743 "data_size": 7936 00:19:12.743 }, 00:19:12.743 { 00:19:12.743 "name": "BaseBdev2", 00:19:12.743 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:12.743 "is_configured": true, 00:19:12.743 "data_offset": 256, 00:19:12.743 "data_size": 7936 00:19:12.743 } 00:19:12.743 ] 00:19:12.743 }' 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.743 09:52:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.743 [2024-10-11 09:52:57.240712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.743 [2024-10-11 09:52:57.258495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.743 09:52:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:12.743 [2024-10-11 09:52:57.260523] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.678 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.937 "name": "raid_bdev1", 00:19:13.937 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:13.937 "strip_size_kb": 0, 00:19:13.937 "state": "online", 00:19:13.937 "raid_level": "raid1", 00:19:13.937 "superblock": true, 00:19:13.937 "num_base_bdevs": 2, 00:19:13.937 "num_base_bdevs_discovered": 2, 00:19:13.937 "num_base_bdevs_operational": 2, 00:19:13.937 "process": { 00:19:13.937 "type": "rebuild", 00:19:13.937 "target": "spare", 00:19:13.937 "progress": { 00:19:13.937 "blocks": 2560, 00:19:13.937 "percent": 32 00:19:13.937 } 00:19:13.937 }, 00:19:13.937 "base_bdevs_list": [ 00:19:13.937 { 00:19:13.937 "name": "spare", 00:19:13.937 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:13.937 "is_configured": true, 00:19:13.937 "data_offset": 256, 00:19:13.937 "data_size": 7936 00:19:13.937 }, 00:19:13.937 { 00:19:13.937 "name": "BaseBdev2", 00:19:13.937 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:13.937 "is_configured": true, 00:19:13.937 "data_offset": 256, 00:19:13.937 "data_size": 7936 00:19:13.937 } 00:19:13.937 ] 00:19:13.937 }' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:13.937 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=694 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.937 09:52:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.937 "name": "raid_bdev1", 00:19:13.937 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:13.937 "strip_size_kb": 0, 00:19:13.937 "state": "online", 00:19:13.937 "raid_level": "raid1", 00:19:13.937 "superblock": true, 00:19:13.937 "num_base_bdevs": 2, 00:19:13.937 "num_base_bdevs_discovered": 2, 00:19:13.937 "num_base_bdevs_operational": 2, 00:19:13.937 "process": { 00:19:13.937 "type": "rebuild", 00:19:13.937 "target": "spare", 00:19:13.937 "progress": { 00:19:13.937 "blocks": 2816, 00:19:13.937 "percent": 35 00:19:13.937 } 00:19:13.937 }, 00:19:13.937 "base_bdevs_list": [ 00:19:13.937 { 00:19:13.937 "name": "spare", 00:19:13.937 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:13.937 "is_configured": true, 00:19:13.937 "data_offset": 256, 00:19:13.937 "data_size": 7936 00:19:13.937 }, 00:19:13.937 { 00:19:13.937 "name": "BaseBdev2", 00:19:13.937 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:13.937 "is_configured": true, 00:19:13.937 "data_offset": 256, 00:19:13.937 "data_size": 7936 00:19:13.937 } 00:19:13.937 ] 00:19:13.937 }' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.937 09:52:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.314 "name": "raid_bdev1", 00:19:15.314 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:15.314 "strip_size_kb": 0, 00:19:15.314 "state": "online", 00:19:15.314 "raid_level": "raid1", 00:19:15.314 "superblock": true, 00:19:15.314 "num_base_bdevs": 2, 00:19:15.314 "num_base_bdevs_discovered": 2, 00:19:15.314 "num_base_bdevs_operational": 2, 00:19:15.314 "process": { 00:19:15.314 "type": "rebuild", 00:19:15.314 "target": "spare", 00:19:15.314 "progress": { 00:19:15.314 "blocks": 5632, 00:19:15.314 "percent": 70 00:19:15.314 } 00:19:15.314 }, 00:19:15.314 "base_bdevs_list": [ 00:19:15.314 { 00:19:15.314 "name": "spare", 00:19:15.314 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:15.314 "is_configured": true, 00:19:15.314 "data_offset": 256, 00:19:15.314 "data_size": 7936 00:19:15.314 
}, 00:19:15.314 { 00:19:15.314 "name": "BaseBdev2", 00:19:15.314 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:15.314 "is_configured": true, 00:19:15.314 "data_offset": 256, 00:19:15.314 "data_size": 7936 00:19:15.314 } 00:19:15.314 ] 00:19:15.314 }' 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.314 09:52:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.881 [2024-10-11 09:53:00.374892] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:15.881 [2024-10-11 09:53:00.374989] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:15.881 [2024-10-11 09:53:00.375112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.140 "name": "raid_bdev1", 00:19:16.140 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:16.140 "strip_size_kb": 0, 00:19:16.140 "state": "online", 00:19:16.140 "raid_level": "raid1", 00:19:16.140 "superblock": true, 00:19:16.140 "num_base_bdevs": 2, 00:19:16.140 "num_base_bdevs_discovered": 2, 00:19:16.140 "num_base_bdevs_operational": 2, 00:19:16.140 "base_bdevs_list": [ 00:19:16.140 { 00:19:16.140 "name": "spare", 00:19:16.140 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:16.140 "is_configured": true, 00:19:16.140 "data_offset": 256, 00:19:16.140 "data_size": 7936 00:19:16.140 }, 00:19:16.140 { 00:19:16.140 "name": "BaseBdev2", 00:19:16.140 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:16.140 "is_configured": true, 00:19:16.140 "data_offset": 256, 00:19:16.140 "data_size": 7936 00:19:16.140 } 00:19:16.140 ] 00:19:16.140 }' 00:19:16.140 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.399 "name": "raid_bdev1", 00:19:16.399 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:16.399 "strip_size_kb": 0, 00:19:16.399 "state": "online", 00:19:16.399 "raid_level": "raid1", 00:19:16.399 "superblock": true, 00:19:16.399 "num_base_bdevs": 2, 00:19:16.399 "num_base_bdevs_discovered": 2, 00:19:16.399 "num_base_bdevs_operational": 2, 00:19:16.399 "base_bdevs_list": [ 00:19:16.399 { 00:19:16.399 "name": "spare", 00:19:16.399 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:16.399 "is_configured": true, 00:19:16.399 "data_offset": 256, 00:19:16.399 "data_size": 7936 00:19:16.399 }, 00:19:16.399 { 00:19:16.399 "name": "BaseBdev2", 00:19:16.399 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:16.399 "is_configured": true, 
00:19:16.399 "data_offset": 256, 00:19:16.399 "data_size": 7936 00:19:16.399 } 00:19:16.399 ] 00:19:16.399 }' 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.399 09:53:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.399 09:53:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.399 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.399 "name": "raid_bdev1", 00:19:16.399 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:16.399 "strip_size_kb": 0, 00:19:16.399 "state": "online", 00:19:16.399 "raid_level": "raid1", 00:19:16.399 "superblock": true, 00:19:16.399 "num_base_bdevs": 2, 00:19:16.399 "num_base_bdevs_discovered": 2, 00:19:16.399 "num_base_bdevs_operational": 2, 00:19:16.399 "base_bdevs_list": [ 00:19:16.399 { 00:19:16.399 "name": "spare", 00:19:16.399 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:16.399 "is_configured": true, 00:19:16.399 "data_offset": 256, 00:19:16.399 "data_size": 7936 00:19:16.399 }, 00:19:16.399 { 00:19:16.399 "name": "BaseBdev2", 00:19:16.399 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:16.399 "is_configured": true, 00:19:16.399 "data_offset": 256, 00:19:16.399 "data_size": 7936 00:19:16.399 } 00:19:16.399 ] 00:19:16.399 }' 00:19:16.399 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.399 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.965 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.966 [2024-10-11 09:53:01.382582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.966 [2024-10-11 09:53:01.382627] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:19:16.966 [2024-10-11 09:53:01.382712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.966 [2024-10-11 09:53:01.382807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.966 [2024-10-11 09:53:01.382820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.966 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:17.224 /dev/nbd0 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.224 1+0 records in 00:19:17.224 1+0 records out 00:19:17.224 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419981 s, 9.8 MB/s 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:17.224 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:17.482 /dev/nbd1 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:19:17.482 09:53:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:17.482 1+0 records in 00:19:17.482 1+0 records out 00:19:17.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472474 s, 8.7 MB/s 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:17.482 09:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.741 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:18.000 09:53:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.000 [2024-10-11 09:53:02.606156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.000 [2024-10-11 09:53:02.606219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.000 [2024-10-11 09:53:02.606242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:18.000 [2024-10-11 09:53:02.606252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.000 [2024-10-11 09:53:02.608609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.000 [2024-10-11 09:53:02.608649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.000 [2024-10-11 09:53:02.608766] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:19:18.000 [2024-10-11 09:53:02.608836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.000 [2024-10-11 09:53:02.609048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.000 spare 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.000 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 [2024-10-11 09:53:02.708976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:18.259 [2024-10-11 09:53:02.709032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:18.259 [2024-10-11 09:53:02.709376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:18.259 [2024-10-11 09:53:02.709580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:18.259 [2024-10-11 09:53:02.709595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:18.259 [2024-10-11 09:53:02.709795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.259 
09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.259 "name": "raid_bdev1", 00:19:18.259 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:18.259 "strip_size_kb": 0, 00:19:18.259 "state": "online", 00:19:18.259 "raid_level": "raid1", 00:19:18.259 "superblock": true, 00:19:18.259 "num_base_bdevs": 2, 00:19:18.259 "num_base_bdevs_discovered": 2, 00:19:18.259 "num_base_bdevs_operational": 2, 00:19:18.259 "base_bdevs_list": [ 00:19:18.259 { 00:19:18.259 "name": "spare", 00:19:18.259 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:18.259 "is_configured": true, 00:19:18.259 "data_offset": 256, 00:19:18.259 
"data_size": 7936 00:19:18.259 }, 00:19:18.259 { 00:19:18.259 "name": "BaseBdev2", 00:19:18.259 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:18.259 "is_configured": true, 00:19:18.259 "data_offset": 256, 00:19:18.259 "data_size": 7936 00:19:18.259 } 00:19:18.259 ] 00:19:18.259 }' 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.259 09:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.830 "name": "raid_bdev1", 00:19:18.830 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:18.830 "strip_size_kb": 0, 00:19:18.830 "state": "online", 00:19:18.830 "raid_level": "raid1", 00:19:18.830 "superblock": true, 00:19:18.830 "num_base_bdevs": 2, 
00:19:18.830 "num_base_bdevs_discovered": 2, 00:19:18.830 "num_base_bdevs_operational": 2, 00:19:18.830 "base_bdevs_list": [ 00:19:18.830 { 00:19:18.830 "name": "spare", 00:19:18.830 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:18.830 "is_configured": true, 00:19:18.830 "data_offset": 256, 00:19:18.830 "data_size": 7936 00:19:18.830 }, 00:19:18.830 { 00:19:18.830 "name": "BaseBdev2", 00:19:18.830 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:18.830 "is_configured": true, 00:19:18.830 "data_offset": 256, 00:19:18.830 "data_size": 7936 00:19:18.830 } 00:19:18.830 ] 00:19:18.830 }' 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.830 09:53:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.830 [2024-10-11 09:53:03.352919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.830 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.831 
09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.831 "name": "raid_bdev1", 00:19:18.831 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:18.831 "strip_size_kb": 0, 00:19:18.831 "state": "online", 00:19:18.831 "raid_level": "raid1", 00:19:18.831 "superblock": true, 00:19:18.831 "num_base_bdevs": 2, 00:19:18.831 "num_base_bdevs_discovered": 1, 00:19:18.831 "num_base_bdevs_operational": 1, 00:19:18.831 "base_bdevs_list": [ 00:19:18.831 { 00:19:18.831 "name": null, 00:19:18.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.831 "is_configured": false, 00:19:18.831 "data_offset": 0, 00:19:18.831 "data_size": 7936 00:19:18.831 }, 00:19:18.831 { 00:19:18.831 "name": "BaseBdev2", 00:19:18.831 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:18.831 "is_configured": true, 00:19:18.831 "data_offset": 256, 00:19:18.831 "data_size": 7936 00:19:18.831 } 00:19:18.831 ] 00:19:18.831 }' 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.831 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.402 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.402 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.402 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.402 [2024-10-11 09:53:03.820287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.402 [2024-10-11 09:53:03.820497] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.402 [2024-10-11 09:53:03.820522] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:19.402 [2024-10-11 09:53:03.820558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.402 [2024-10-11 09:53:03.837598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:19.402 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.402 09:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:19.402 [2024-10-11 09:53:03.839463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.337 "name": "raid_bdev1", 00:19:20.337 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:20.337 "strip_size_kb": 0, 00:19:20.337 "state": "online", 
00:19:20.337 "raid_level": "raid1", 00:19:20.337 "superblock": true, 00:19:20.337 "num_base_bdevs": 2, 00:19:20.337 "num_base_bdevs_discovered": 2, 00:19:20.337 "num_base_bdevs_operational": 2, 00:19:20.337 "process": { 00:19:20.337 "type": "rebuild", 00:19:20.337 "target": "spare", 00:19:20.337 "progress": { 00:19:20.337 "blocks": 2560, 00:19:20.337 "percent": 32 00:19:20.337 } 00:19:20.337 }, 00:19:20.337 "base_bdevs_list": [ 00:19:20.337 { 00:19:20.337 "name": "spare", 00:19:20.337 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:20.337 "is_configured": true, 00:19:20.337 "data_offset": 256, 00:19:20.337 "data_size": 7936 00:19:20.337 }, 00:19:20.337 { 00:19:20.337 "name": "BaseBdev2", 00:19:20.337 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:20.337 "is_configured": true, 00:19:20.337 "data_offset": 256, 00:19:20.337 "data_size": 7936 00:19:20.337 } 00:19:20.337 ] 00:19:20.337 }' 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.337 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.595 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.595 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.595 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.595 09:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.595 [2024-10-11 09:53:04.979223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.595 [2024-10-11 09:53:05.044784] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.595 [2024-10-11 
09:53:05.044859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.595 [2024-10-11 09:53:05.044874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.595 [2024-10-11 09:53:05.044884] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.595 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.595 "name": "raid_bdev1", 00:19:20.595 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:20.595 "strip_size_kb": 0, 00:19:20.595 "state": "online", 00:19:20.595 "raid_level": "raid1", 00:19:20.595 "superblock": true, 00:19:20.595 "num_base_bdevs": 2, 00:19:20.595 "num_base_bdevs_discovered": 1, 00:19:20.595 "num_base_bdevs_operational": 1, 00:19:20.595 "base_bdevs_list": [ 00:19:20.595 { 00:19:20.596 "name": null, 00:19:20.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.596 "is_configured": false, 00:19:20.596 "data_offset": 0, 00:19:20.596 "data_size": 7936 00:19:20.596 }, 00:19:20.596 { 00:19:20.596 "name": "BaseBdev2", 00:19:20.596 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:20.596 "is_configured": true, 00:19:20.596 "data_offset": 256, 00:19:20.596 "data_size": 7936 00:19:20.596 } 00:19:20.596 ] 00:19:20.596 }' 00:19:20.596 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.596 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.162 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.162 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.162 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.162 [2024-10-11 09:53:05.524886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.162 [2024-10-11 09:53:05.524967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.162 [2024-10-11 09:53:05.524993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:19:21.162 [2024-10-11 09:53:05.525004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.162 [2024-10-11 09:53:05.525503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.162 [2024-10-11 09:53:05.525524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.162 [2024-10-11 09:53:05.525622] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:21.162 [2024-10-11 09:53:05.525638] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.162 [2024-10-11 09:53:05.525647] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:21.162 [2024-10-11 09:53:05.525671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.162 [2024-10-11 09:53:05.542514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:21.162 spare 00:19:21.162 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.162 09:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:21.162 [2024-10-11 09:53:05.544359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.098 "name": "raid_bdev1", 00:19:22.098 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:22.098 "strip_size_kb": 0, 00:19:22.098 "state": "online", 00:19:22.098 "raid_level": "raid1", 00:19:22.098 "superblock": true, 00:19:22.098 "num_base_bdevs": 2, 00:19:22.098 "num_base_bdevs_discovered": 2, 00:19:22.098 "num_base_bdevs_operational": 2, 00:19:22.098 "process": { 00:19:22.098 "type": "rebuild", 00:19:22.098 "target": "spare", 00:19:22.098 "progress": { 00:19:22.098 "blocks": 2560, 00:19:22.098 "percent": 32 00:19:22.098 } 00:19:22.098 }, 00:19:22.098 "base_bdevs_list": [ 00:19:22.098 { 00:19:22.098 "name": "spare", 00:19:22.098 "uuid": "e0e71b7b-1438-5975-a4bf-f657368ded78", 00:19:22.098 "is_configured": true, 00:19:22.098 "data_offset": 256, 00:19:22.098 "data_size": 7936 00:19:22.098 }, 00:19:22.098 { 00:19:22.098 "name": "BaseBdev2", 00:19:22.098 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:22.098 "is_configured": true, 00:19:22.098 "data_offset": 256, 00:19:22.098 "data_size": 7936 00:19:22.098 } 00:19:22.098 ] 00:19:22.098 }' 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.098 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.099 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.099 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.099 [2024-10-11 09:53:06.688498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.357 [2024-10-11 09:53:06.749485] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.357 [2024-10-11 09:53:06.749545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.357 [2024-10-11 09:53:06.749562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.358 [2024-10-11 09:53:06.749569] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.358 "name": "raid_bdev1", 00:19:22.358 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:22.358 "strip_size_kb": 0, 00:19:22.358 "state": "online", 00:19:22.358 "raid_level": "raid1", 00:19:22.358 "superblock": true, 00:19:22.358 "num_base_bdevs": 2, 00:19:22.358 "num_base_bdevs_discovered": 1, 00:19:22.358 "num_base_bdevs_operational": 1, 00:19:22.358 "base_bdevs_list": [ 00:19:22.358 { 00:19:22.358 "name": null, 00:19:22.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.358 "is_configured": false, 00:19:22.358 "data_offset": 0, 00:19:22.358 "data_size": 7936 00:19:22.358 }, 00:19:22.358 { 00:19:22.358 "name": "BaseBdev2", 00:19:22.358 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:22.358 "is_configured": true, 00:19:22.358 "data_offset": 256, 00:19:22.358 "data_size": 7936 00:19:22.358 } 00:19:22.358 ] 00:19:22.358 }' 
00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.358 09:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.616 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.616 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.617 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.617 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.617 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.617 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.617 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.875 "name": "raid_bdev1", 00:19:22.875 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:22.875 "strip_size_kb": 0, 00:19:22.875 "state": "online", 00:19:22.875 "raid_level": "raid1", 00:19:22.875 "superblock": true, 00:19:22.875 "num_base_bdevs": 2, 00:19:22.875 "num_base_bdevs_discovered": 1, 00:19:22.875 "num_base_bdevs_operational": 1, 00:19:22.875 "base_bdevs_list": [ 00:19:22.875 { 00:19:22.875 "name": null, 00:19:22.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.875 "is_configured": false, 00:19:22.875 "data_offset": 0, 
00:19:22.875 "data_size": 7936 00:19:22.875 }, 00:19:22.875 { 00:19:22.875 "name": "BaseBdev2", 00:19:22.875 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:22.875 "is_configured": true, 00:19:22.875 "data_offset": 256, 00:19:22.875 "data_size": 7936 00:19:22.875 } 00:19:22.875 ] 00:19:22.875 }' 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.875 [2024-10-11 09:53:07.385693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.875 [2024-10-11 09:53:07.385795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.875 [2024-10-11 09:53:07.385844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:22.875 [2024-10-11 09:53:07.385858] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.875 [2024-10-11 09:53:07.386436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.875 [2024-10-11 09:53:07.386465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.875 [2024-10-11 09:53:07.386587] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:22.875 [2024-10-11 09:53:07.386607] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.875 [2024-10-11 09:53:07.386621] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:22.875 [2024-10-11 09:53:07.386637] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:22.875 BaseBdev1 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.875 09:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.810 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.068 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.068 "name": "raid_bdev1", 00:19:24.068 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:24.068 "strip_size_kb": 0, 00:19:24.068 "state": "online", 00:19:24.068 "raid_level": "raid1", 00:19:24.068 "superblock": true, 00:19:24.068 "num_base_bdevs": 2, 00:19:24.068 "num_base_bdevs_discovered": 1, 00:19:24.068 "num_base_bdevs_operational": 1, 00:19:24.068 "base_bdevs_list": [ 00:19:24.068 { 00:19:24.068 "name": null, 00:19:24.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.068 "is_configured": false, 00:19:24.068 "data_offset": 0, 00:19:24.068 "data_size": 7936 00:19:24.068 }, 00:19:24.068 { 00:19:24.069 "name": "BaseBdev2", 00:19:24.069 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:24.069 "is_configured": true, 00:19:24.069 "data_offset": 256, 00:19:24.069 "data_size": 7936 00:19:24.069 } 00:19:24.069 ] 00:19:24.069 }' 00:19:24.069 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.069 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.328 "name": "raid_bdev1", 00:19:24.328 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:24.328 "strip_size_kb": 0, 00:19:24.328 "state": "online", 00:19:24.328 "raid_level": "raid1", 00:19:24.328 "superblock": true, 00:19:24.328 "num_base_bdevs": 2, 00:19:24.328 "num_base_bdevs_discovered": 1, 00:19:24.328 "num_base_bdevs_operational": 1, 00:19:24.328 "base_bdevs_list": [ 00:19:24.328 { 00:19:24.328 "name": null, 00:19:24.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.328 "is_configured": false, 00:19:24.328 "data_offset": 0, 00:19:24.328 "data_size": 7936 00:19:24.328 }, 00:19:24.328 { 00:19:24.328 "name": "BaseBdev2", 00:19:24.328 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:24.328 "is_configured": true, 
00:19:24.328 "data_offset": 256, 00:19:24.328 "data_size": 7936 00:19:24.328 } 00:19:24.328 ] 00:19:24.328 }' 00:19:24.328 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.588 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.588 09:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.588 [2024-10-11 09:53:09.022970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.588 [2024-10-11 09:53:09.023172] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.588 [2024-10-11 09:53:09.023200] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:24.588 request: 00:19:24.588 { 00:19:24.588 "base_bdev": "BaseBdev1", 00:19:24.588 "raid_bdev": "raid_bdev1", 00:19:24.588 "method": "bdev_raid_add_base_bdev", 00:19:24.588 "req_id": 1 00:19:24.588 } 00:19:24.588 Got JSON-RPC error response 00:19:24.588 response: 00:19:24.588 { 00:19:24.588 "code": -22, 00:19:24.588 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:24.588 } 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.588 09:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.523 "name": "raid_bdev1", 00:19:25.523 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:25.523 "strip_size_kb": 0, 00:19:25.523 "state": "online", 00:19:25.523 "raid_level": "raid1", 00:19:25.523 "superblock": true, 00:19:25.523 "num_base_bdevs": 2, 00:19:25.523 "num_base_bdevs_discovered": 1, 00:19:25.523 "num_base_bdevs_operational": 1, 00:19:25.523 "base_bdevs_list": [ 00:19:25.523 { 00:19:25.523 "name": null, 00:19:25.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.523 "is_configured": false, 00:19:25.523 "data_offset": 0, 00:19:25.523 "data_size": 7936 00:19:25.523 }, 00:19:25.523 { 00:19:25.523 "name": "BaseBdev2", 00:19:25.523 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:25.523 "is_configured": true, 00:19:25.523 "data_offset": 256, 00:19:25.523 "data_size": 7936 00:19:25.523 } 00:19:25.523 ] 00:19:25.523 }' 
00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.523 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.782 "name": "raid_bdev1", 00:19:25.782 "uuid": "b0df8032-0ef7-4085-939d-209bb1248f38", 00:19:25.782 "strip_size_kb": 0, 00:19:25.782 "state": "online", 00:19:25.782 "raid_level": "raid1", 00:19:25.782 "superblock": true, 00:19:25.782 "num_base_bdevs": 2, 00:19:25.782 "num_base_bdevs_discovered": 1, 00:19:25.782 "num_base_bdevs_operational": 1, 00:19:25.782 "base_bdevs_list": [ 00:19:25.782 { 00:19:25.782 "name": null, 00:19:25.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.782 "is_configured": false, 00:19:25.782 "data_offset": 0, 
00:19:25.782 "data_size": 7936 00:19:25.782 }, 00:19:25.782 { 00:19:25.782 "name": "BaseBdev2", 00:19:25.782 "uuid": "75f4f064-452c-5575-836a-ed0f08260a95", 00:19:25.782 "is_configured": true, 00:19:25.782 "data_offset": 256, 00:19:25.782 "data_size": 7936 00:19:25.782 } 00:19:25.782 ] 00:19:25.782 }' 00:19:25.782 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87079 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 87079 ']' 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 87079 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87079 00:19:26.041 killing process with pid 87079 00:19:26.041 Received shutdown signal, test time was about 60.000000 seconds 00:19:26.041 00:19:26.041 Latency(us) 00:19:26.041 [2024-10-11T09:53:10.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.041 [2024-10-11T09:53:10.673Z] =================================================================================================================== 00:19:26.041 [2024-10-11T09:53:10.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.041 09:53:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87079' 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 87079 00:19:26.041 [2024-10-11 09:53:10.504113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.041 [2024-10-11 09:53:10.504261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.041 09:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 87079 00:19:26.041 [2024-10-11 09:53:10.504311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.041 [2024-10-11 09:53:10.504322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:26.298 [2024-10-11 09:53:10.787587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.251 09:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:27.251 00:19:27.251 real 0m19.689s 00:19:27.251 user 0m25.449s 00:19:27.251 sys 0m2.723s 00:19:27.251 09:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.251 09:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.251 ************************************ 00:19:27.251 END TEST raid_rebuild_test_sb_4k 00:19:27.251 ************************************ 00:19:27.509 09:53:11 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:27.509 09:53:11 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:27.509 09:53:11 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:27.509 09:53:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.509 09:53:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.509 ************************************ 00:19:27.509 START TEST raid_state_function_test_sb_md_separate 00:19:27.509 ************************************ 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87764 00:19:27.509 Process raid pid: 87764 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87764' 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87764 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87764 ']' 00:19:27.509 09:53:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.509 09:53:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.509 [2024-10-11 09:53:12.032650] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:27.509 [2024-10-11 09:53:12.032800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.768 [2024-10-11 09:53:12.201249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.768 [2024-10-11 09:53:12.326440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.027 [2024-10-11 09:53:12.543224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.027 [2024-10-11 09:53:12.543267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.285 [2024-10-11 09:53:12.842538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.285 [2024-10-11 09:53:12.842591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.285 [2024-10-11 09:53:12.842601] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.285 [2024-10-11 09:53:12.842611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.285 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.285 "name": "Existed_Raid", 00:19:28.285 "uuid": "e22e240a-bdfe-4fe8-87c7-4dedb52928f3", 00:19:28.285 "strip_size_kb": 0, 00:19:28.285 "state": "configuring", 00:19:28.285 "raid_level": "raid1", 00:19:28.285 "superblock": true, 00:19:28.285 "num_base_bdevs": 2, 00:19:28.285 "num_base_bdevs_discovered": 0, 00:19:28.286 "num_base_bdevs_operational": 2, 00:19:28.286 "base_bdevs_list": [ 00:19:28.286 { 00:19:28.286 "name": "BaseBdev1", 00:19:28.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.286 "is_configured": false, 00:19:28.286 "data_offset": 0, 00:19:28.286 "data_size": 0 00:19:28.286 }, 00:19:28.286 { 00:19:28.286 "name": "BaseBdev2", 00:19:28.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.286 "is_configured": false, 00:19:28.286 "data_offset": 0, 00:19:28.286 "data_size": 0 00:19:28.286 } 00:19:28.286 ] 00:19:28.286 }' 00:19:28.286 09:53:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.286 09:53:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 [2024-10-11 09:53:13.261766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.852 [2024-10-11 09:53:13.261823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 [2024-10-11 09:53:13.273768] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.852 [2024-10-11 09:53:13.273816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.852 [2024-10-11 09:53:13.273826] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.852 [2024-10-11 09:53:13.273838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.852 09:53:13 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 [2024-10-11 09:53:13.326260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.852 BaseBdev1 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.852 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.852 [ 00:19:28.852 { 00:19:28.852 "name": "BaseBdev1", 00:19:28.852 "aliases": [ 00:19:28.852 "2e9c39a9-6b7a-428d-8f3f-2880c8fd3e17" 00:19:28.852 ], 00:19:28.852 "product_name": "Malloc disk", 00:19:28.852 "block_size": 4096, 00:19:28.852 "num_blocks": 8192, 00:19:28.852 "uuid": "2e9c39a9-6b7a-428d-8f3f-2880c8fd3e17", 00:19:28.852 "md_size": 32, 00:19:28.852 "md_interleave": false, 00:19:28.852 "dif_type": 0, 00:19:28.852 "assigned_rate_limits": { 00:19:28.852 "rw_ios_per_sec": 0, 00:19:28.852 "rw_mbytes_per_sec": 0, 00:19:28.852 "r_mbytes_per_sec": 0, 00:19:28.852 "w_mbytes_per_sec": 0 00:19:28.852 }, 00:19:28.852 "claimed": true, 00:19:28.852 "claim_type": "exclusive_write", 00:19:28.852 "zoned": false, 00:19:28.852 "supported_io_types": { 00:19:28.852 "read": true, 00:19:28.852 "write": true, 00:19:28.852 "unmap": true, 00:19:28.852 "flush": true, 00:19:28.852 "reset": true, 00:19:28.852 "nvme_admin": false, 00:19:28.852 "nvme_io": false, 00:19:28.852 "nvme_io_md": false, 00:19:28.852 "write_zeroes": true, 00:19:28.852 "zcopy": true, 00:19:28.852 "get_zone_info": false, 00:19:28.852 "zone_management": false, 00:19:28.852 "zone_append": false, 00:19:28.852 "compare": false, 00:19:28.852 "compare_and_write": false, 00:19:28.852 "abort": true, 00:19:28.852 "seek_hole": false, 00:19:28.852 "seek_data": false, 00:19:28.852 "copy": true, 00:19:28.852 "nvme_iov_md": false 00:19:28.852 }, 00:19:28.852 "memory_domains": [ 00:19:28.852 { 00:19:28.852 "dma_device_id": "system", 00:19:28.852 "dma_device_type": 1 00:19:28.852 }, 
00:19:28.852 { 00:19:28.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.852 "dma_device_type": 2 00:19:28.852 } 00:19:28.853 ], 00:19:28.853 "driver_specific": {} 00:19:28.853 } 00:19:28.853 ] 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.853 "name": "Existed_Raid", 00:19:28.853 "uuid": "c1192f13-7dea-47d3-9816-99b6c2cd2094", 00:19:28.853 "strip_size_kb": 0, 00:19:28.853 "state": "configuring", 00:19:28.853 "raid_level": "raid1", 00:19:28.853 "superblock": true, 00:19:28.853 "num_base_bdevs": 2, 00:19:28.853 "num_base_bdevs_discovered": 1, 00:19:28.853 "num_base_bdevs_operational": 2, 00:19:28.853 "base_bdevs_list": [ 00:19:28.853 { 00:19:28.853 "name": "BaseBdev1", 00:19:28.853 "uuid": "2e9c39a9-6b7a-428d-8f3f-2880c8fd3e17", 00:19:28.853 "is_configured": true, 00:19:28.853 "data_offset": 256, 00:19:28.853 "data_size": 7936 00:19:28.853 }, 00:19:28.853 { 00:19:28.853 "name": "BaseBdev2", 00:19:28.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.853 "is_configured": false, 00:19:28.853 "data_offset": 0, 00:19:28.853 "data_size": 0 00:19:28.853 } 00:19:28.853 ] 00:19:28.853 }' 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.853 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:19:29.420 [2024-10-11 09:53:13.841458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.420 [2024-10-11 09:53:13.841523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.420 [2024-10-11 09:53:13.853486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.420 [2024-10-11 09:53:13.855279] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.420 [2024-10-11 09:53:13.855323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.420 "name": "Existed_Raid", 00:19:29.420 "uuid": "428da900-f6e9-4dfb-b3e2-bdac8d4a455b", 00:19:29.420 "strip_size_kb": 0, 00:19:29.420 "state": "configuring", 00:19:29.420 "raid_level": "raid1", 00:19:29.420 "superblock": true, 00:19:29.420 "num_base_bdevs": 2, 00:19:29.420 "num_base_bdevs_discovered": 1, 00:19:29.420 
"num_base_bdevs_operational": 2, 00:19:29.420 "base_bdevs_list": [ 00:19:29.420 { 00:19:29.420 "name": "BaseBdev1", 00:19:29.420 "uuid": "2e9c39a9-6b7a-428d-8f3f-2880c8fd3e17", 00:19:29.420 "is_configured": true, 00:19:29.420 "data_offset": 256, 00:19:29.420 "data_size": 7936 00:19:29.420 }, 00:19:29.420 { 00:19:29.420 "name": "BaseBdev2", 00:19:29.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.420 "is_configured": false, 00:19:29.420 "data_offset": 0, 00:19:29.420 "data_size": 0 00:19:29.420 } 00:19:29.420 ] 00:19:29.420 }' 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.420 09:53:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.678 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:29.678 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.678 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.937 [2024-10-11 09:53:14.331044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.937 [2024-10-11 09:53:14.331278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:29.937 [2024-10-11 09:53:14.331295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:29.937 [2024-10-11 09:53:14.331375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:29.937 [2024-10-11 09:53:14.331502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:29.937 [2024-10-11 09:53:14.331525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:29.937 [2024-10-11 
09:53:14.331623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.937 BaseBdev2 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.937 [ 00:19:29.937 { 00:19:29.937 "name": "BaseBdev2", 00:19:29.937 "aliases": [ 00:19:29.937 
"83015e91-c455-41f1-a5c0-64c934d6bc33" 00:19:29.937 ], 00:19:29.937 "product_name": "Malloc disk", 00:19:29.937 "block_size": 4096, 00:19:29.937 "num_blocks": 8192, 00:19:29.937 "uuid": "83015e91-c455-41f1-a5c0-64c934d6bc33", 00:19:29.937 "md_size": 32, 00:19:29.937 "md_interleave": false, 00:19:29.937 "dif_type": 0, 00:19:29.937 "assigned_rate_limits": { 00:19:29.937 "rw_ios_per_sec": 0, 00:19:29.937 "rw_mbytes_per_sec": 0, 00:19:29.937 "r_mbytes_per_sec": 0, 00:19:29.937 "w_mbytes_per_sec": 0 00:19:29.937 }, 00:19:29.937 "claimed": true, 00:19:29.937 "claim_type": "exclusive_write", 00:19:29.937 "zoned": false, 00:19:29.937 "supported_io_types": { 00:19:29.937 "read": true, 00:19:29.937 "write": true, 00:19:29.937 "unmap": true, 00:19:29.937 "flush": true, 00:19:29.937 "reset": true, 00:19:29.937 "nvme_admin": false, 00:19:29.937 "nvme_io": false, 00:19:29.937 "nvme_io_md": false, 00:19:29.937 "write_zeroes": true, 00:19:29.937 "zcopy": true, 00:19:29.937 "get_zone_info": false, 00:19:29.937 "zone_management": false, 00:19:29.937 "zone_append": false, 00:19:29.937 "compare": false, 00:19:29.937 "compare_and_write": false, 00:19:29.937 "abort": true, 00:19:29.937 "seek_hole": false, 00:19:29.937 "seek_data": false, 00:19:29.937 "copy": true, 00:19:29.937 "nvme_iov_md": false 00:19:29.937 }, 00:19:29.937 "memory_domains": [ 00:19:29.937 { 00:19:29.937 "dma_device_id": "system", 00:19:29.937 "dma_device_type": 1 00:19:29.937 }, 00:19:29.937 { 00:19:29.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.937 "dma_device_type": 2 00:19:29.937 } 00:19:29.937 ], 00:19:29.937 "driver_specific": {} 00:19:29.937 } 00:19:29.937 ] 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.937 09:53:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.937 "name": "Existed_Raid", 00:19:29.937 "uuid": "428da900-f6e9-4dfb-b3e2-bdac8d4a455b", 00:19:29.937 "strip_size_kb": 0, 00:19:29.937 "state": "online", 00:19:29.937 "raid_level": "raid1", 00:19:29.937 "superblock": true, 00:19:29.937 "num_base_bdevs": 2, 00:19:29.937 "num_base_bdevs_discovered": 2, 00:19:29.937 "num_base_bdevs_operational": 2, 00:19:29.937 "base_bdevs_list": [ 00:19:29.937 { 00:19:29.937 "name": "BaseBdev1", 00:19:29.937 "uuid": "2e9c39a9-6b7a-428d-8f3f-2880c8fd3e17", 00:19:29.937 "is_configured": true, 00:19:29.937 "data_offset": 256, 00:19:29.937 "data_size": 7936 00:19:29.937 }, 00:19:29.937 { 00:19:29.937 "name": "BaseBdev2", 00:19:29.937 "uuid": "83015e91-c455-41f1-a5c0-64c934d6bc33", 00:19:29.937 "is_configured": true, 00:19:29.937 "data_offset": 256, 00:19:29.937 "data_size": 7936 00:19:29.937 } 00:19:29.937 ] 00:19:29.937 }' 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.937 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:30.196 09:53:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.196 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:30.196 [2024-10-11 09:53:14.818586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.454 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.454 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.454 "name": "Existed_Raid", 00:19:30.454 "aliases": [ 00:19:30.454 "428da900-f6e9-4dfb-b3e2-bdac8d4a455b" 00:19:30.454 ], 00:19:30.454 "product_name": "Raid Volume", 00:19:30.454 "block_size": 4096, 00:19:30.454 "num_blocks": 7936, 00:19:30.454 "uuid": "428da900-f6e9-4dfb-b3e2-bdac8d4a455b", 00:19:30.454 "md_size": 32, 00:19:30.454 "md_interleave": false, 00:19:30.454 "dif_type": 0, 00:19:30.454 "assigned_rate_limits": { 00:19:30.454 "rw_ios_per_sec": 0, 00:19:30.454 "rw_mbytes_per_sec": 0, 00:19:30.454 "r_mbytes_per_sec": 0, 00:19:30.454 "w_mbytes_per_sec": 0 00:19:30.454 }, 00:19:30.454 "claimed": false, 00:19:30.454 "zoned": false, 00:19:30.454 "supported_io_types": { 00:19:30.454 "read": true, 00:19:30.454 "write": true, 00:19:30.454 "unmap": false, 00:19:30.454 "flush": false, 00:19:30.454 "reset": true, 00:19:30.454 "nvme_admin": false, 00:19:30.454 "nvme_io": false, 00:19:30.454 "nvme_io_md": false, 00:19:30.454 "write_zeroes": true, 00:19:30.454 "zcopy": false, 00:19:30.454 "get_zone_info": 
false, 00:19:30.454 "zone_management": false, 00:19:30.454 "zone_append": false, 00:19:30.454 "compare": false, 00:19:30.454 "compare_and_write": false, 00:19:30.454 "abort": false, 00:19:30.454 "seek_hole": false, 00:19:30.454 "seek_data": false, 00:19:30.454 "copy": false, 00:19:30.454 "nvme_iov_md": false 00:19:30.454 }, 00:19:30.454 "memory_domains": [ 00:19:30.454 { 00:19:30.454 "dma_device_id": "system", 00:19:30.454 "dma_device_type": 1 00:19:30.454 }, 00:19:30.454 { 00:19:30.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.454 "dma_device_type": 2 00:19:30.454 }, 00:19:30.454 { 00:19:30.454 "dma_device_id": "system", 00:19:30.454 "dma_device_type": 1 00:19:30.454 }, 00:19:30.454 { 00:19:30.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.454 "dma_device_type": 2 00:19:30.454 } 00:19:30.454 ], 00:19:30.454 "driver_specific": { 00:19:30.454 "raid": { 00:19:30.454 "uuid": "428da900-f6e9-4dfb-b3e2-bdac8d4a455b", 00:19:30.454 "strip_size_kb": 0, 00:19:30.454 "state": "online", 00:19:30.454 "raid_level": "raid1", 00:19:30.454 "superblock": true, 00:19:30.454 "num_base_bdevs": 2, 00:19:30.454 "num_base_bdevs_discovered": 2, 00:19:30.454 "num_base_bdevs_operational": 2, 00:19:30.454 "base_bdevs_list": [ 00:19:30.454 { 00:19:30.454 "name": "BaseBdev1", 00:19:30.454 "uuid": "2e9c39a9-6b7a-428d-8f3f-2880c8fd3e17", 00:19:30.454 "is_configured": true, 00:19:30.454 "data_offset": 256, 00:19:30.454 "data_size": 7936 00:19:30.454 }, 00:19:30.454 { 00:19:30.455 "name": "BaseBdev2", 00:19:30.455 "uuid": "83015e91-c455-41f1-a5c0-64c934d6bc33", 00:19:30.455 "is_configured": true, 00:19:30.455 "data_offset": 256, 00:19:30.455 "data_size": 7936 00:19:30.455 } 00:19:30.455 ] 00:19:30.455 } 00:19:30.455 } 00:19:30.455 }' 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.455 09:53:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:30.455 BaseBdev2' 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.455 09:53:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.455 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.455 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:30.455 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:30.455 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:30.455 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.455 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.455 [2024-10-11 09:53:15.025973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.713 "name": "Existed_Raid", 00:19:30.713 "uuid": 
"428da900-f6e9-4dfb-b3e2-bdac8d4a455b", 00:19:30.713 "strip_size_kb": 0, 00:19:30.713 "state": "online", 00:19:30.713 "raid_level": "raid1", 00:19:30.713 "superblock": true, 00:19:30.713 "num_base_bdevs": 2, 00:19:30.713 "num_base_bdevs_discovered": 1, 00:19:30.713 "num_base_bdevs_operational": 1, 00:19:30.713 "base_bdevs_list": [ 00:19:30.713 { 00:19:30.713 "name": null, 00:19:30.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.713 "is_configured": false, 00:19:30.713 "data_offset": 0, 00:19:30.713 "data_size": 7936 00:19:30.713 }, 00:19:30.713 { 00:19:30.713 "name": "BaseBdev2", 00:19:30.713 "uuid": "83015e91-c455-41f1-a5c0-64c934d6bc33", 00:19:30.713 "is_configured": true, 00:19:30.713 "data_offset": 256, 00:19:30.713 "data_size": 7936 00:19:30.713 } 00:19:30.713 ] 00:19:30.713 }' 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.713 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.281 [2024-10-11 09:53:15.665951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:31.281 [2024-10-11 09:53:15.666062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.281 [2024-10-11 09:53:15.763024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.281 [2024-10-11 09:53:15.763085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.281 [2024-10-11 09:53:15.763097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.281 09:53:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87764 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87764 ']' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87764 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87764 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.281 killing process with pid 87764 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87764' 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87764 00:19:31.281 [2024-10-11 09:53:15.845541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:19:31.281 09:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87764 00:19:31.281 [2024-10-11 09:53:15.861474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.656 09:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:32.656 00:19:32.656 real 0m5.012s 00:19:32.656 user 0m7.161s 00:19:32.656 sys 0m0.919s 00:19:32.656 09:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.656 09:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.656 ************************************ 00:19:32.656 END TEST raid_state_function_test_sb_md_separate 00:19:32.656 ************************************ 00:19:32.656 09:53:17 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:32.656 09:53:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:32.656 09:53:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.656 09:53:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.656 ************************************ 00:19:32.656 START TEST raid_superblock_test_md_separate 00:19:32.656 ************************************ 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:32.656 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88015 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88015 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88015 ']' 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.657 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.657 [2024-10-11 09:53:17.122629] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:32.657 [2024-10-11 09:53:17.122824] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88015 ] 00:19:32.915 [2024-10-11 09:53:17.296249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.916 [2024-10-11 09:53:17.419859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.174 [2024-10-11 09:53:17.644556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.174 [2024-10-11 09:53:17.644600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.433 09:53:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.433 malloc1 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.433 [2024-10-11 09:53:18.010942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:33.433 [2024-10-11 09:53:18.011014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.433 [2024-10-11 09:53:18.011036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:33.433 [2024-10-11 
09:53:18.011046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.433 [2024-10-11 09:53:18.012941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.433 [2024-10-11 09:53:18.012978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:33.433 pt1 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.433 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.692 malloc2 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.692 09:53:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.692 [2024-10-11 09:53:18.077458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:33.692 [2024-10-11 09:53:18.077572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.692 [2024-10-11 09:53:18.077602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:33.692 [2024-10-11 09:53:18.077614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.692 [2024-10-11 09:53:18.079748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.692 [2024-10-11 09:53:18.079797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:33.692 pt2 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.692 [2024-10-11 09:53:18.089474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:33.692 
[2024-10-11 09:53:18.091580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.692 [2024-10-11 09:53:18.091835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:33.692 [2024-10-11 09:53:18.091852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:33.692 [2024-10-11 09:53:18.091955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:33.692 [2024-10-11 09:53:18.092108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:33.692 [2024-10-11 09:53:18.092129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:33.692 [2024-10-11 09:53:18.092262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.692 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.692 "name": "raid_bdev1", 00:19:33.692 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:33.692 "strip_size_kb": 0, 00:19:33.693 "state": "online", 00:19:33.693 "raid_level": "raid1", 00:19:33.693 "superblock": true, 00:19:33.693 "num_base_bdevs": 2, 00:19:33.693 "num_base_bdevs_discovered": 2, 00:19:33.693 "num_base_bdevs_operational": 2, 00:19:33.693 "base_bdevs_list": [ 00:19:33.693 { 00:19:33.693 "name": "pt1", 00:19:33.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.693 "is_configured": true, 00:19:33.693 "data_offset": 256, 00:19:33.693 "data_size": 7936 00:19:33.693 }, 00:19:33.693 { 00:19:33.693 "name": "pt2", 00:19:33.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.693 "is_configured": true, 00:19:33.693 "data_offset": 256, 00:19:33.693 "data_size": 7936 00:19:33.693 } 00:19:33.693 ] 00:19:33.693 }' 00:19:33.693 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.693 09:53:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.951 [2024-10-11 09:53:18.553007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.951 "name": "raid_bdev1", 00:19:33.951 "aliases": [ 00:19:33.951 "a264594d-0b92-44e1-8c07-6af8ec3275f0" 00:19:33.951 ], 00:19:33.951 "product_name": "Raid Volume", 00:19:33.951 "block_size": 4096, 00:19:33.951 "num_blocks": 7936, 00:19:33.951 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:33.951 "md_size": 32, 00:19:33.951 "md_interleave": false, 00:19:33.951 "dif_type": 0, 00:19:33.951 
"assigned_rate_limits": { 00:19:33.951 "rw_ios_per_sec": 0, 00:19:33.951 "rw_mbytes_per_sec": 0, 00:19:33.951 "r_mbytes_per_sec": 0, 00:19:33.951 "w_mbytes_per_sec": 0 00:19:33.951 }, 00:19:33.951 "claimed": false, 00:19:33.951 "zoned": false, 00:19:33.951 "supported_io_types": { 00:19:33.951 "read": true, 00:19:33.951 "write": true, 00:19:33.951 "unmap": false, 00:19:33.951 "flush": false, 00:19:33.951 "reset": true, 00:19:33.951 "nvme_admin": false, 00:19:33.951 "nvme_io": false, 00:19:33.951 "nvme_io_md": false, 00:19:33.951 "write_zeroes": true, 00:19:33.951 "zcopy": false, 00:19:33.951 "get_zone_info": false, 00:19:33.951 "zone_management": false, 00:19:33.951 "zone_append": false, 00:19:33.951 "compare": false, 00:19:33.951 "compare_and_write": false, 00:19:33.951 "abort": false, 00:19:33.951 "seek_hole": false, 00:19:33.951 "seek_data": false, 00:19:33.951 "copy": false, 00:19:33.951 "nvme_iov_md": false 00:19:33.951 }, 00:19:33.951 "memory_domains": [ 00:19:33.951 { 00:19:33.951 "dma_device_id": "system", 00:19:33.951 "dma_device_type": 1 00:19:33.951 }, 00:19:33.951 { 00:19:33.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.951 "dma_device_type": 2 00:19:33.951 }, 00:19:33.951 { 00:19:33.951 "dma_device_id": "system", 00:19:33.951 "dma_device_type": 1 00:19:33.951 }, 00:19:33.951 { 00:19:33.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.951 "dma_device_type": 2 00:19:33.951 } 00:19:33.951 ], 00:19:33.951 "driver_specific": { 00:19:33.951 "raid": { 00:19:33.951 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:33.951 "strip_size_kb": 0, 00:19:33.951 "state": "online", 00:19:33.951 "raid_level": "raid1", 00:19:33.951 "superblock": true, 00:19:33.951 "num_base_bdevs": 2, 00:19:33.951 "num_base_bdevs_discovered": 2, 00:19:33.951 "num_base_bdevs_operational": 2, 00:19:33.951 "base_bdevs_list": [ 00:19:33.951 { 00:19:33.951 "name": "pt1", 00:19:33.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.951 "is_configured": true, 
00:19:33.951 "data_offset": 256, 00:19:33.951 "data_size": 7936 00:19:33.951 }, 00:19:33.951 { 00:19:33.951 "name": "pt2", 00:19:33.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.951 "is_configured": true, 00:19:33.951 "data_offset": 256, 00:19:33.951 "data_size": 7936 00:19:33.951 } 00:19:33.951 ] 00:19:33.951 } 00:19:33.951 } 00:19:33.951 }' 00:19:33.951 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:34.210 pt2' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.210 [2024-10-11 09:53:18.788511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a264594d-0b92-44e1-8c07-6af8ec3275f0 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z a264594d-0b92-44e1-8c07-6af8ec3275f0 ']' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.210 [2024-10-11 09:53:18.824175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.210 [2024-10-11 09:53:18.824203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.210 [2024-10-11 09:53:18.824310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.210 [2024-10-11 09:53:18.824375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.210 [2024-10-11 09:53:18.824388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.210 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:34.470 09:53:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 [2024-10-11 09:53:18.960001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:34.470 [2024-10-11 09:53:18.961854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:34.470 [2024-10-11 09:53:18.961941] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:34.470 [2024-10-11 09:53:18.961991] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:34.470 [2024-10-11 09:53:18.962005] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.470 [2024-10-11 09:53:18.962016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:34.470 request: 00:19:34.470 { 00:19:34.470 "name": "raid_bdev1", 00:19:34.470 "raid_level": "raid1", 00:19:34.470 "base_bdevs": [ 00:19:34.470 "malloc1", 00:19:34.470 "malloc2" 00:19:34.470 ], 00:19:34.470 "superblock": false, 00:19:34.470 "method": "bdev_raid_create", 00:19:34.470 "req_id": 1 00:19:34.470 } 00:19:34.470 Got JSON-RPC error response 00:19:34.470 response: 00:19:34.470 { 00:19:34.470 "code": -17, 00:19:34.470 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:34.470 } 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:34.470 09:53:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 [2024-10-11 09:53:19.015845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.470 [2024-10-11 09:53:19.015896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.470 [2024-10-11 09:53:19.015910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:34.470 [2024-10-11 09:53:19.015921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.470 [2024-10-11 09:53:19.017822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.470 [2024-10-11 09:53:19.017862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.470 [2024-10-11 09:53:19.017905] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:34.470 [2024-10-11 09:53:19.017956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:34.470 pt1 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.470 "name": "raid_bdev1", 00:19:34.470 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:34.470 "strip_size_kb": 0, 00:19:34.470 "state": "configuring", 00:19:34.470 "raid_level": "raid1", 00:19:34.470 "superblock": true, 00:19:34.470 "num_base_bdevs": 2, 00:19:34.470 "num_base_bdevs_discovered": 1, 00:19:34.470 "num_base_bdevs_operational": 2, 00:19:34.470 "base_bdevs_list": [ 00:19:34.470 { 
00:19:34.470 "name": "pt1", 00:19:34.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.470 "is_configured": true, 00:19:34.470 "data_offset": 256, 00:19:34.470 "data_size": 7936 00:19:34.470 }, 00:19:34.470 { 00:19:34.470 "name": null, 00:19:34.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.470 "is_configured": false, 00:19:34.470 "data_offset": 256, 00:19:34.470 "data_size": 7936 00:19:34.470 } 00:19:34.470 ] 00:19:34.470 }' 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.470 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.037 [2024-10-11 09:53:19.447149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.037 [2024-10-11 09:53:19.447223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.037 [2024-10-11 09:53:19.447245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:35.037 [2024-10-11 09:53:19.447256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.037 [2024-10-11 09:53:19.447491] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:35.037 [2024-10-11 09:53:19.447514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.037 [2024-10-11 09:53:19.447582] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:35.037 [2024-10-11 09:53:19.447606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.037 [2024-10-11 09:53:19.447753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:35.037 [2024-10-11 09:53:19.447766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:35.037 [2024-10-11 09:53:19.447837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:35.037 [2024-10-11 09:53:19.447947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:35.037 [2024-10-11 09:53:19.447972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:35.037 [2024-10-11 09:53:19.448077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.037 pt2 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.037 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.037 "name": "raid_bdev1", 00:19:35.037 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:35.037 "strip_size_kb": 0, 00:19:35.037 "state": "online", 00:19:35.037 "raid_level": "raid1", 00:19:35.037 "superblock": true, 00:19:35.037 "num_base_bdevs": 2, 00:19:35.037 "num_base_bdevs_discovered": 2, 00:19:35.037 "num_base_bdevs_operational": 2, 00:19:35.037 "base_bdevs_list": [ 00:19:35.037 { 00:19:35.037 "name": "pt1", 00:19:35.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.037 
"is_configured": true, 00:19:35.037 "data_offset": 256, 00:19:35.037 "data_size": 7936 00:19:35.037 }, 00:19:35.037 { 00:19:35.037 "name": "pt2", 00:19:35.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.037 "is_configured": true, 00:19:35.037 "data_offset": 256, 00:19:35.037 "data_size": 7936 00:19:35.037 } 00:19:35.037 ] 00:19:35.037 }' 00:19:35.038 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.038 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.302 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.302 [2024-10-11 09:53:19.914613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.572 09:53:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:35.572 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.572 "name": "raid_bdev1", 00:19:35.572 "aliases": [ 00:19:35.572 "a264594d-0b92-44e1-8c07-6af8ec3275f0" 00:19:35.572 ], 00:19:35.572 "product_name": "Raid Volume", 00:19:35.572 "block_size": 4096, 00:19:35.572 "num_blocks": 7936, 00:19:35.572 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:35.572 "md_size": 32, 00:19:35.572 "md_interleave": false, 00:19:35.572 "dif_type": 0, 00:19:35.572 "assigned_rate_limits": { 00:19:35.572 "rw_ios_per_sec": 0, 00:19:35.572 "rw_mbytes_per_sec": 0, 00:19:35.572 "r_mbytes_per_sec": 0, 00:19:35.572 "w_mbytes_per_sec": 0 00:19:35.572 }, 00:19:35.572 "claimed": false, 00:19:35.572 "zoned": false, 00:19:35.572 "supported_io_types": { 00:19:35.572 "read": true, 00:19:35.572 "write": true, 00:19:35.572 "unmap": false, 00:19:35.572 "flush": false, 00:19:35.572 "reset": true, 00:19:35.572 "nvme_admin": false, 00:19:35.572 "nvme_io": false, 00:19:35.572 "nvme_io_md": false, 00:19:35.572 "write_zeroes": true, 00:19:35.572 "zcopy": false, 00:19:35.572 "get_zone_info": false, 00:19:35.572 "zone_management": false, 00:19:35.572 "zone_append": false, 00:19:35.572 "compare": false, 00:19:35.572 "compare_and_write": false, 00:19:35.572 "abort": false, 00:19:35.572 "seek_hole": false, 00:19:35.572 "seek_data": false, 00:19:35.572 "copy": false, 00:19:35.572 "nvme_iov_md": false 00:19:35.572 }, 00:19:35.572 "memory_domains": [ 00:19:35.572 { 00:19:35.572 "dma_device_id": "system", 00:19:35.572 "dma_device_type": 1 00:19:35.572 }, 00:19:35.572 { 00:19:35.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.572 "dma_device_type": 2 00:19:35.572 }, 00:19:35.572 { 00:19:35.572 "dma_device_id": "system", 00:19:35.572 "dma_device_type": 1 00:19:35.572 }, 00:19:35.572 { 00:19:35.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.572 "dma_device_type": 2 00:19:35.572 } 00:19:35.572 ], 00:19:35.572 "driver_specific": { 
00:19:35.572 "raid": { 00:19:35.572 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:35.572 "strip_size_kb": 0, 00:19:35.572 "state": "online", 00:19:35.572 "raid_level": "raid1", 00:19:35.572 "superblock": true, 00:19:35.572 "num_base_bdevs": 2, 00:19:35.572 "num_base_bdevs_discovered": 2, 00:19:35.572 "num_base_bdevs_operational": 2, 00:19:35.572 "base_bdevs_list": [ 00:19:35.572 { 00:19:35.572 "name": "pt1", 00:19:35.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.572 "is_configured": true, 00:19:35.572 "data_offset": 256, 00:19:35.572 "data_size": 7936 00:19:35.572 }, 00:19:35.572 { 00:19:35.572 "name": "pt2", 00:19:35.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.572 "is_configured": true, 00:19:35.572 "data_offset": 256, 00:19:35.572 "data_size": 7936 00:19:35.572 } 00:19:35.572 ] 00:19:35.572 } 00:19:35.572 } 00:19:35.572 }' 00:19:35.572 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.572 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:35.572 pt2' 00:19:35.572 09:53:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.572 09:53:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.572 [2024-10-11 09:53:20.090273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a264594d-0b92-44e1-8c07-6af8ec3275f0 '!=' a264594d-0b92-44e1-8c07-6af8ec3275f0 ']' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.572 [2024-10-11 09:53:20.122024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.572 "name": "raid_bdev1", 00:19:35.572 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:35.572 "strip_size_kb": 0, 00:19:35.572 "state": "online", 00:19:35.572 "raid_level": "raid1", 00:19:35.572 "superblock": true, 00:19:35.572 "num_base_bdevs": 2, 00:19:35.572 "num_base_bdevs_discovered": 1, 00:19:35.572 "num_base_bdevs_operational": 1, 00:19:35.572 "base_bdevs_list": [ 00:19:35.572 { 00:19:35.572 "name": null, 00:19:35.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.572 "is_configured": false, 00:19:35.572 "data_offset": 0, 00:19:35.572 "data_size": 7936 00:19:35.572 }, 00:19:35.572 { 00:19:35.572 
"name": "pt2", 00:19:35.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.572 "is_configured": true, 00:19:35.572 "data_offset": 256, 00:19:35.572 "data_size": 7936 00:19:35.572 } 00:19:35.572 ] 00:19:35.572 }' 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.572 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.139 [2024-10-11 09:53:20.577257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.139 [2024-10-11 09:53:20.577289] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.139 [2024-10-11 09:53:20.577394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.139 [2024-10-11 09:53:20.577450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.139 [2024-10-11 09:53:20.577466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:36.139 09:53:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.139 09:53:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.139 [2024-10-11 09:53:20.649118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.139 [2024-10-11 09:53:20.649190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.139 [2024-10-11 09:53:20.649207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:36.139 [2024-10-11 09:53:20.649217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.139 [2024-10-11 09:53:20.651211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.139 [2024-10-11 09:53:20.651253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.139 [2024-10-11 09:53:20.651302] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:36.139 [2024-10-11 09:53:20.651349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.139 [2024-10-11 09:53:20.651448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:36.139 [2024-10-11 09:53:20.651460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.139 [2024-10-11 09:53:20.651533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:36.139 [2024-10-11 09:53:20.651656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:36.139 [2024-10-11 09:53:20.651663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:36.139 [2024-10-11 09:53:20.651784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.139 pt2 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.139 09:53:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.139 "name": "raid_bdev1", 00:19:36.139 "uuid": 
"a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:36.139 "strip_size_kb": 0, 00:19:36.139 "state": "online", 00:19:36.139 "raid_level": "raid1", 00:19:36.139 "superblock": true, 00:19:36.139 "num_base_bdevs": 2, 00:19:36.139 "num_base_bdevs_discovered": 1, 00:19:36.139 "num_base_bdevs_operational": 1, 00:19:36.139 "base_bdevs_list": [ 00:19:36.139 { 00:19:36.139 "name": null, 00:19:36.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.139 "is_configured": false, 00:19:36.139 "data_offset": 256, 00:19:36.139 "data_size": 7936 00:19:36.139 }, 00:19:36.139 { 00:19:36.139 "name": "pt2", 00:19:36.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.139 "is_configured": true, 00:19:36.139 "data_offset": 256, 00:19:36.139 "data_size": 7936 00:19:36.139 } 00:19:36.139 ] 00:19:36.139 }' 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.139 09:53:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.706 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.706 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 [2024-10-11 09:53:21.072419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.707 [2024-10-11 09:53:21.072451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.707 [2024-10-11 09:53:21.072521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.707 [2024-10-11 09:53:21.072572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.707 [2024-10-11 09:53:21.072581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 [2024-10-11 09:53:21.124335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.707 [2024-10-11 09:53:21.124395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.707 [2024-10-11 09:53:21.124412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:36.707 [2024-10-11 09:53:21.124421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.707 [2024-10-11 
09:53:21.126363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.707 [2024-10-11 09:53:21.126400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.707 [2024-10-11 09:53:21.126452] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:36.707 [2024-10-11 09:53:21.126502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.707 [2024-10-11 09:53:21.126639] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:36.707 [2024-10-11 09:53:21.126655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.707 [2024-10-11 09:53:21.126672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:36.707 [2024-10-11 09:53:21.126749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.707 [2024-10-11 09:53:21.126812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:36.707 [2024-10-11 09:53:21.126820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.707 [2024-10-11 09:53:21.126890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:36.707 [2024-10-11 09:53:21.127002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:36.707 [2024-10-11 09:53:21.127028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:36.707 [2024-10-11 09:53:21.127131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.707 pt1 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.707 09:53:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.707 "name": "raid_bdev1", 00:19:36.707 "uuid": "a264594d-0b92-44e1-8c07-6af8ec3275f0", 00:19:36.707 "strip_size_kb": 0, 00:19:36.707 "state": "online", 00:19:36.707 "raid_level": "raid1", 00:19:36.707 "superblock": true, 00:19:36.707 "num_base_bdevs": 2, 00:19:36.707 "num_base_bdevs_discovered": 1, 00:19:36.707 "num_base_bdevs_operational": 1, 00:19:36.707 "base_bdevs_list": [ 00:19:36.707 { 00:19:36.707 "name": null, 00:19:36.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.707 "is_configured": false, 00:19:36.707 "data_offset": 256, 00:19:36.707 "data_size": 7936 00:19:36.707 }, 00:19:36.707 { 00:19:36.707 "name": "pt2", 00:19:36.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.707 "is_configured": true, 00:19:36.707 "data_offset": 256, 00:19:36.707 "data_size": 7936 00:19:36.707 } 00:19:36.707 ] 00:19:36.707 }' 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.707 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:36.965 09:53:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.965 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.224 [2024-10-11 09:53:21.603798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a264594d-0b92-44e1-8c07-6af8ec3275f0 '!=' a264594d-0b92-44e1-8c07-6af8ec3275f0 ']' 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88015 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88015 ']' 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 88015 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88015 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:37.224 killing process with pid 88015 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88015' 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@969 -- # kill 88015 00:19:37.224 [2024-10-11 09:53:21.660415] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.224 [2024-10-11 09:53:21.660520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.224 [2024-10-11 09:53:21.660574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.224 [2024-10-11 09:53:21.660588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:37.224 09:53:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 88015 00:19:37.482 [2024-10-11 09:53:21.870638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:38.417 09:53:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:38.417 00:19:38.417 real 0m5.926s 00:19:38.417 user 0m8.883s 00:19:38.417 sys 0m1.155s 00:19:38.417 09:53:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.417 09:53:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.417 ************************************ 00:19:38.417 END TEST raid_superblock_test_md_separate 00:19:38.417 ************************************ 00:19:38.417 09:53:22 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:38.417 09:53:22 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:38.417 09:53:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:38.417 09:53:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.417 09:53:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.417 ************************************ 00:19:38.417 START TEST raid_rebuild_test_sb_md_separate 00:19:38.417 
************************************ 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:38.417 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88339 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88339 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88339 ']' 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.418 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.676 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:38.676 Zero copy mechanism will not be used. 00:19:38.676 [2024-10-11 09:53:23.121673] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:38.676 [2024-10-11 09:53:23.121840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88339 ] 00:19:38.676 [2024-10-11 09:53:23.294051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.935 [2024-10-11 09:53:23.421831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.193 [2024-10-11 09:53:23.639406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.193 [2024-10-11 09:53:23.639479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.451 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.451 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:19:39.451 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:39.451 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:39.451 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.451 09:53:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.451 BaseBdev1_malloc 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.451 [2024-10-11 09:53:24.029488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:39.451 [2024-10-11 09:53:24.029546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.451 [2024-10-11 09:53:24.029569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:39.451 [2024-10-11 09:53:24.029588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.451 [2024-10-11 09:53:24.031521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.451 [2024-10-11 09:53:24.031561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.451 BaseBdev1 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.451 09:53:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.451 BaseBdev2_malloc 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.451 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.710 [2024-10-11 09:53:24.087980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:39.710 [2024-10-11 09:53:24.088046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.710 [2024-10-11 09:53:24.088067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:39.710 [2024-10-11 09:53:24.088078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.710 [2024-10-11 09:53:24.089967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.710 [2024-10-11 09:53:24.090006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:39.710 BaseBdev2 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.710 spare_malloc 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.710 spare_delay 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.710 [2024-10-11 09:53:24.169488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:39.710 [2024-10-11 09:53:24.169578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.710 [2024-10-11 09:53:24.169603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:39.710 [2024-10-11 09:53:24.169615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.710 [2024-10-11 09:53:24.171489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.710 [2024-10-11 09:53:24.171532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:39.710 spare 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:39.710 
09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.710 [2024-10-11 09:53:24.181506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.710 [2024-10-11 09:53:24.183317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:39.710 [2024-10-11 09:53:24.183523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:39.710 [2024-10-11 09:53:24.183538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:39.710 [2024-10-11 09:53:24.183621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:39.710 [2024-10-11 09:53:24.183773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:39.710 [2024-10-11 09:53:24.183788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:39.710 [2024-10-11 09:53:24.183922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.710 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.711 
09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.711 "name": "raid_bdev1", 00:19:39.711 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:39.711 "strip_size_kb": 0, 00:19:39.711 "state": "online", 00:19:39.711 "raid_level": "raid1", 00:19:39.711 "superblock": true, 00:19:39.711 "num_base_bdevs": 2, 00:19:39.711 "num_base_bdevs_discovered": 2, 00:19:39.711 "num_base_bdevs_operational": 2, 00:19:39.711 "base_bdevs_list": [ 00:19:39.711 { 00:19:39.711 "name": "BaseBdev1", 00:19:39.711 "uuid": "63b7a57c-a677-535a-9c73-c8b001adbdd8", 00:19:39.711 "is_configured": true, 00:19:39.711 "data_offset": 256, 00:19:39.711 "data_size": 7936 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "name": "BaseBdev2", 00:19:39.711 "uuid": 
"cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:39.711 "is_configured": true, 00:19:39.711 "data_offset": 256, 00:19:39.711 "data_size": 7936 00:19:39.711 } 00:19:39.711 ] 00:19:39.711 }' 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.711 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:40.278 [2024-10-11 09:53:24.660999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:40.278 09:53:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.278 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:40.536 [2024-10-11 09:53:24.936284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:40.536 /dev/nbd0 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:40.536 09:53:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.536 1+0 records in 00:19:40.536 1+0 records out 00:19:40.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230111 s, 17.8 MB/s 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.536 09:53:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:40.536 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:19:40.536 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.536 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.536 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:40.536 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:40.536 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:41.103 7936+0 records in 00:19:41.103 7936+0 records out 00:19:41.103 32505856 bytes (33 MB, 31 MiB) copied, 0.664147 s, 48.9 MB/s 00:19:41.103 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:41.103 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.103 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:41.103 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.103 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:41.103 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.103 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:41.361 [2024-10-11 09:53:25.873363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.362 [2024-10-11 09:53:25.913356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:41.362 09:53:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.362 "name": "raid_bdev1", 00:19:41.362 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:41.362 "strip_size_kb": 0, 00:19:41.362 "state": "online", 00:19:41.362 "raid_level": "raid1", 00:19:41.362 "superblock": true, 00:19:41.362 "num_base_bdevs": 2, 00:19:41.362 "num_base_bdevs_discovered": 1, 00:19:41.362 "num_base_bdevs_operational": 1, 00:19:41.362 "base_bdevs_list": [ 00:19:41.362 { 00:19:41.362 "name": null, 00:19:41.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.362 "is_configured": false, 00:19:41.362 "data_offset": 0, 00:19:41.362 "data_size": 7936 00:19:41.362 }, 00:19:41.362 { 00:19:41.362 "name": "BaseBdev2", 00:19:41.362 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:41.362 "is_configured": true, 00:19:41.362 "data_offset": 256, 00:19:41.362 "data_size": 7936 00:19:41.362 } 
00:19:41.362 ] 00:19:41.362 }' 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.362 09:53:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.928 09:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.928 09:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.928 09:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.928 [2024-10-11 09:53:26.364646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.928 [2024-10-11 09:53:26.381566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:41.928 09:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.928 09:53:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:41.928 [2024-10-11 09:53:26.383427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.861 09:53:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.861 "name": "raid_bdev1", 00:19:42.861 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:42.861 "strip_size_kb": 0, 00:19:42.861 "state": "online", 00:19:42.861 "raid_level": "raid1", 00:19:42.861 "superblock": true, 00:19:42.861 "num_base_bdevs": 2, 00:19:42.861 "num_base_bdevs_discovered": 2, 00:19:42.861 "num_base_bdevs_operational": 2, 00:19:42.861 "process": { 00:19:42.861 "type": "rebuild", 00:19:42.861 "target": "spare", 00:19:42.861 "progress": { 00:19:42.861 "blocks": 2560, 00:19:42.861 "percent": 32 00:19:42.861 } 00:19:42.861 }, 00:19:42.861 "base_bdevs_list": [ 00:19:42.861 { 00:19:42.861 "name": "spare", 00:19:42.861 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:42.861 "is_configured": true, 00:19:42.861 "data_offset": 256, 00:19:42.861 "data_size": 7936 00:19:42.861 }, 00:19:42.861 { 00:19:42.861 "name": "BaseBdev2", 00:19:42.861 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:42.861 "is_configured": true, 00:19:42.861 "data_offset": 256, 00:19:42.861 "data_size": 7936 00:19:42.861 } 00:19:42.861 ] 00:19:42.861 }' 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.861 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:43.119 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.120 [2024-10-11 09:53:27.535749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.120 [2024-10-11 09:53:27.589472] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:43.120 [2024-10-11 09:53:27.589555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.120 [2024-10-11 09:53:27.589587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.120 [2024-10-11 09:53:27.589602] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.120 "name": "raid_bdev1", 00:19:43.120 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:43.120 "strip_size_kb": 0, 00:19:43.120 "state": "online", 00:19:43.120 "raid_level": "raid1", 00:19:43.120 "superblock": true, 00:19:43.120 "num_base_bdevs": 2, 00:19:43.120 "num_base_bdevs_discovered": 1, 00:19:43.120 "num_base_bdevs_operational": 1, 00:19:43.120 "base_bdevs_list": [ 00:19:43.120 { 00:19:43.120 "name": null, 00:19:43.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.120 "is_configured": false, 00:19:43.120 "data_offset": 0, 00:19:43.120 "data_size": 7936 00:19:43.120 }, 00:19:43.120 { 00:19:43.120 "name": "BaseBdev2", 00:19:43.120 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:43.120 "is_configured": true, 00:19:43.120 "data_offset": 
256, 00:19:43.120 "data_size": 7936 00:19:43.120 } 00:19:43.120 ] 00:19:43.120 }' 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.120 09:53:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.687 "name": "raid_bdev1", 00:19:43.687 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:43.687 "strip_size_kb": 0, 00:19:43.687 "state": "online", 00:19:43.687 "raid_level": "raid1", 00:19:43.687 "superblock": true, 00:19:43.687 "num_base_bdevs": 2, 00:19:43.687 "num_base_bdevs_discovered": 1, 00:19:43.687 "num_base_bdevs_operational": 1, 
00:19:43.687 "base_bdevs_list": [ 00:19:43.687 { 00:19:43.687 "name": null, 00:19:43.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.687 "is_configured": false, 00:19:43.687 "data_offset": 0, 00:19:43.687 "data_size": 7936 00:19:43.687 }, 00:19:43.687 { 00:19:43.687 "name": "BaseBdev2", 00:19:43.687 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:43.687 "is_configured": true, 00:19:43.687 "data_offset": 256, 00:19:43.687 "data_size": 7936 00:19:43.687 } 00:19:43.687 ] 00:19:43.687 }' 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.687 [2024-10-11 09:53:28.191208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:43.687 [2024-10-11 09:53:28.205581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.687 09:53:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:43.687 [2024-10-11 09:53:28.207998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:44.622 09:53:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.622 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.881 "name": "raid_bdev1", 00:19:44.881 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:44.881 "strip_size_kb": 0, 00:19:44.881 "state": "online", 00:19:44.881 "raid_level": "raid1", 00:19:44.881 "superblock": true, 00:19:44.881 "num_base_bdevs": 2, 00:19:44.881 "num_base_bdevs_discovered": 2, 00:19:44.881 "num_base_bdevs_operational": 2, 00:19:44.881 "process": { 00:19:44.881 "type": "rebuild", 00:19:44.881 "target": "spare", 00:19:44.881 "progress": { 00:19:44.881 "blocks": 2560, 00:19:44.881 "percent": 32 00:19:44.881 } 00:19:44.881 }, 00:19:44.881 "base_bdevs_list": [ 00:19:44.881 { 00:19:44.881 "name": "spare", 00:19:44.881 "uuid": 
"ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:44.881 "is_configured": true, 00:19:44.881 "data_offset": 256, 00:19:44.881 "data_size": 7936 00:19:44.881 }, 00:19:44.881 { 00:19:44.881 "name": "BaseBdev2", 00:19:44.881 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:44.881 "is_configured": true, 00:19:44.881 "data_offset": 256, 00:19:44.881 "data_size": 7936 00:19:44.881 } 00:19:44.881 ] 00:19:44.881 }' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:44.881 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=725 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.881 
09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.881 "name": "raid_bdev1", 00:19:44.881 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:44.881 "strip_size_kb": 0, 00:19:44.881 "state": "online", 00:19:44.881 "raid_level": "raid1", 00:19:44.881 "superblock": true, 00:19:44.881 "num_base_bdevs": 2, 00:19:44.881 "num_base_bdevs_discovered": 2, 00:19:44.881 "num_base_bdevs_operational": 2, 00:19:44.881 "process": { 00:19:44.881 "type": "rebuild", 00:19:44.881 "target": "spare", 00:19:44.881 "progress": { 00:19:44.881 "blocks": 2816, 00:19:44.881 "percent": 35 00:19:44.881 } 00:19:44.881 }, 00:19:44.881 "base_bdevs_list": [ 00:19:44.881 { 00:19:44.881 "name": "spare", 00:19:44.881 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:44.881 "is_configured": true, 00:19:44.881 "data_offset": 256, 00:19:44.881 "data_size": 7936 00:19:44.881 
}, 00:19:44.881 { 00:19:44.881 "name": "BaseBdev2", 00:19:44.881 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:44.881 "is_configured": true, 00:19:44.881 "data_offset": 256, 00:19:44.881 "data_size": 7936 00:19:44.881 } 00:19:44.881 ] 00:19:44.881 }' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.881 09:53:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.278 "name": "raid_bdev1", 00:19:46.278 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:46.278 "strip_size_kb": 0, 00:19:46.278 "state": "online", 00:19:46.278 "raid_level": "raid1", 00:19:46.278 "superblock": true, 00:19:46.278 "num_base_bdevs": 2, 00:19:46.278 "num_base_bdevs_discovered": 2, 00:19:46.278 "num_base_bdevs_operational": 2, 00:19:46.278 "process": { 00:19:46.278 "type": "rebuild", 00:19:46.278 "target": "spare", 00:19:46.278 "progress": { 00:19:46.278 "blocks": 5632, 00:19:46.278 "percent": 70 00:19:46.278 } 00:19:46.278 }, 00:19:46.278 "base_bdevs_list": [ 00:19:46.278 { 00:19:46.278 "name": "spare", 00:19:46.278 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:46.278 "is_configured": true, 00:19:46.278 "data_offset": 256, 00:19:46.278 "data_size": 7936 00:19:46.278 }, 00:19:46.278 { 00:19:46.278 "name": "BaseBdev2", 00:19:46.278 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:46.278 "is_configured": true, 00:19:46.278 "data_offset": 256, 00:19:46.278 "data_size": 7936 00:19:46.278 } 00:19:46.278 ] 00:19:46.278 }' 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.278 09:53:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.845 [2024-10-11 09:53:31.323731] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:46.845 [2024-10-11 09:53:31.323848] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:46.845 [2024-10-11 09:53:31.323960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.103 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.103 "name": "raid_bdev1", 00:19:47.103 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:47.103 
"strip_size_kb": 0, 00:19:47.103 "state": "online", 00:19:47.103 "raid_level": "raid1", 00:19:47.103 "superblock": true, 00:19:47.103 "num_base_bdevs": 2, 00:19:47.103 "num_base_bdevs_discovered": 2, 00:19:47.103 "num_base_bdevs_operational": 2, 00:19:47.103 "base_bdevs_list": [ 00:19:47.103 { 00:19:47.103 "name": "spare", 00:19:47.103 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:47.103 "is_configured": true, 00:19:47.103 "data_offset": 256, 00:19:47.103 "data_size": 7936 00:19:47.103 }, 00:19:47.103 { 00:19:47.104 "name": "BaseBdev2", 00:19:47.104 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:47.104 "is_configured": true, 00:19:47.104 "data_offset": 256, 00:19:47.104 "data_size": 7936 00:19:47.104 } 00:19:47.104 ] 00:19:47.104 }' 00:19:47.104 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.104 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:47.104 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.362 09:53:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.362 "name": "raid_bdev1", 00:19:47.362 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:47.362 "strip_size_kb": 0, 00:19:47.362 "state": "online", 00:19:47.362 "raid_level": "raid1", 00:19:47.362 "superblock": true, 00:19:47.362 "num_base_bdevs": 2, 00:19:47.362 "num_base_bdevs_discovered": 2, 00:19:47.362 "num_base_bdevs_operational": 2, 00:19:47.362 "base_bdevs_list": [ 00:19:47.362 { 00:19:47.362 "name": "spare", 00:19:47.362 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:47.362 "is_configured": true, 00:19:47.362 "data_offset": 256, 00:19:47.362 "data_size": 7936 00:19:47.362 }, 00:19:47.362 { 00:19:47.362 "name": "BaseBdev2", 00:19:47.362 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:47.362 "is_configured": true, 00:19:47.362 "data_offset": 256, 00:19:47.362 "data_size": 7936 00:19:47.362 } 00:19:47.362 ] 00:19:47.362 }' 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.362 09:53:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.362 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.363 "name": "raid_bdev1", 00:19:47.363 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:47.363 "strip_size_kb": 0, 00:19:47.363 "state": "online", 00:19:47.363 "raid_level": "raid1", 00:19:47.363 "superblock": true, 00:19:47.363 "num_base_bdevs": 2, 00:19:47.363 "num_base_bdevs_discovered": 2, 00:19:47.363 "num_base_bdevs_operational": 2, 00:19:47.363 "base_bdevs_list": [ 00:19:47.363 { 00:19:47.363 "name": "spare", 00:19:47.363 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:47.363 "is_configured": true, 00:19:47.363 "data_offset": 256, 00:19:47.363 "data_size": 7936 00:19:47.363 }, 00:19:47.363 { 00:19:47.363 "name": "BaseBdev2", 00:19:47.363 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:47.363 "is_configured": true, 00:19:47.363 "data_offset": 256, 00:19:47.363 "data_size": 7936 00:19:47.363 } 00:19:47.363 ] 00:19:47.363 }' 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.363 09:53:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.929 [2024-10-11 09:53:32.320854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.929 [2024-10-11 09:53:32.320888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.929 [2024-10-11 09:53:32.320976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.929 [2024-10-11 09:53:32.321043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:19:47.929 [2024-10-11 09:53:32.321053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:47.929 09:53:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:47.929 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:48.187 /dev/nbd0 00:19:48.187 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.188 1+0 records in 00:19:48.188 1+0 records out 00:19:48.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434428 
s, 9.4 MB/s 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.188 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:48.445 /dev/nbd1 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:48.445 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.446 1+0 records in 00:19:48.446 1+0 records out 00:19:48.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281 s, 14.6 MB/s 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.446 09:53:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:48.446 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:48.446 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.446 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:48.446 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.446 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:48.446 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.446 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.704 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:48.962 
09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.962 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.963 [2024-10-11 09:53:33.538493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:48.963 [2024-10-11 09:53:33.538563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.963 [2024-10-11 09:53:33.538589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:19:48.963 [2024-10-11 09:53:33.538598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.963 [2024-10-11 09:53:33.540631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.963 [2024-10-11 09:53:33.540673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:48.963 [2024-10-11 09:53:33.540754] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:48.963 [2024-10-11 09:53:33.540813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.963 [2024-10-11 09:53:33.540944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:48.963 spare 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.963 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.221 [2024-10-11 09:53:33.640846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:49.221 [2024-10-11 09:53:33.640883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:49.221 [2024-10-11 09:53:33.641000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:49.221 [2024-10-11 09:53:33.641187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:49.221 [2024-10-11 09:53:33.641205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:49.221 [2024-10-11 09:53:33.641343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.221 09:53:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.221 "name": "raid_bdev1", 00:19:49.221 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:49.221 "strip_size_kb": 0, 00:19:49.221 "state": "online", 00:19:49.221 "raid_level": "raid1", 00:19:49.221 "superblock": true, 00:19:49.221 "num_base_bdevs": 2, 00:19:49.221 "num_base_bdevs_discovered": 2, 00:19:49.221 "num_base_bdevs_operational": 2, 00:19:49.221 "base_bdevs_list": [ 00:19:49.221 { 00:19:49.221 "name": "spare", 00:19:49.221 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:49.221 "is_configured": true, 00:19:49.221 "data_offset": 256, 00:19:49.221 "data_size": 7936 00:19:49.221 }, 00:19:49.221 { 00:19:49.221 "name": "BaseBdev2", 00:19:49.221 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:49.221 "is_configured": true, 00:19:49.221 "data_offset": 256, 00:19:49.221 "data_size": 7936 00:19:49.221 } 00:19:49.221 ] 00:19:49.221 }' 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.221 09:53:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.479 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.479 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.479 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.479 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.480 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.480 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.480 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.480 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.480 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.480 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.738 "name": "raid_bdev1", 00:19:49.738 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:49.738 "strip_size_kb": 0, 00:19:49.738 "state": "online", 00:19:49.738 "raid_level": "raid1", 00:19:49.738 "superblock": true, 00:19:49.738 "num_base_bdevs": 2, 00:19:49.738 "num_base_bdevs_discovered": 2, 00:19:49.738 "num_base_bdevs_operational": 2, 00:19:49.738 "base_bdevs_list": [ 00:19:49.738 { 00:19:49.738 "name": "spare", 00:19:49.738 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:49.738 "is_configured": true, 00:19:49.738 "data_offset": 256, 00:19:49.738 "data_size": 7936 00:19:49.738 }, 00:19:49.738 { 00:19:49.738 "name": "BaseBdev2", 00:19:49.738 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:49.738 "is_configured": true, 00:19:49.738 "data_offset": 256, 00:19:49.738 "data_size": 7936 00:19:49.738 } 00:19:49.738 ] 00:19:49.738 }' 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.738 [2024-10-11 09:53:34.245362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:49.738 09:53:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.738 "name": "raid_bdev1", 00:19:49.738 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:49.738 "strip_size_kb": 0, 00:19:49.738 "state": "online", 00:19:49.738 "raid_level": "raid1", 00:19:49.738 "superblock": true, 00:19:49.738 "num_base_bdevs": 2, 00:19:49.738 "num_base_bdevs_discovered": 1, 00:19:49.738 "num_base_bdevs_operational": 1, 00:19:49.738 "base_bdevs_list": [ 00:19:49.738 { 00:19:49.738 "name": null, 00:19:49.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.738 "is_configured": false, 00:19:49.738 "data_offset": 0, 00:19:49.738 "data_size": 7936 00:19:49.738 }, 00:19:49.738 { 00:19:49.738 "name": "BaseBdev2", 00:19:49.738 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:49.738 "is_configured": true, 00:19:49.738 "data_offset": 256, 00:19:49.738 "data_size": 7936 00:19:49.738 } 
00:19:49.738 ] 00:19:49.738 }' 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.738 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.304 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.304 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.304 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.304 [2024-10-11 09:53:34.672660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.304 [2024-10-11 09:53:34.672903] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:50.304 [2024-10-11 09:53:34.672928] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:50.304 [2024-10-11 09:53:34.672970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.304 [2024-10-11 09:53:34.687337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:50.304 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.304 09:53:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:50.304 [2024-10-11 09:53:34.689361] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.239 "name": "raid_bdev1", 00:19:51.239 
"uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:51.239 "strip_size_kb": 0, 00:19:51.239 "state": "online", 00:19:51.239 "raid_level": "raid1", 00:19:51.239 "superblock": true, 00:19:51.239 "num_base_bdevs": 2, 00:19:51.239 "num_base_bdevs_discovered": 2, 00:19:51.239 "num_base_bdevs_operational": 2, 00:19:51.239 "process": { 00:19:51.239 "type": "rebuild", 00:19:51.239 "target": "spare", 00:19:51.239 "progress": { 00:19:51.239 "blocks": 2560, 00:19:51.239 "percent": 32 00:19:51.239 } 00:19:51.239 }, 00:19:51.239 "base_bdevs_list": [ 00:19:51.239 { 00:19:51.239 "name": "spare", 00:19:51.239 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:51.239 "is_configured": true, 00:19:51.239 "data_offset": 256, 00:19:51.239 "data_size": 7936 00:19:51.239 }, 00:19:51.239 { 00:19:51.239 "name": "BaseBdev2", 00:19:51.239 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:51.239 "is_configured": true, 00:19:51.239 "data_offset": 256, 00:19:51.239 "data_size": 7936 00:19:51.239 } 00:19:51.239 ] 00:19:51.239 }' 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.239 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.239 [2024-10-11 09:53:35.821901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.498 
[2024-10-11 09:53:35.895577] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:51.498 [2024-10-11 09:53:35.895694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.498 [2024-10-11 09:53:35.895724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.498 [2024-10-11 09:53:35.895744] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.498 09:53:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.498 "name": "raid_bdev1", 00:19:51.498 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:51.498 "strip_size_kb": 0, 00:19:51.498 "state": "online", 00:19:51.498 "raid_level": "raid1", 00:19:51.498 "superblock": true, 00:19:51.498 "num_base_bdevs": 2, 00:19:51.498 "num_base_bdevs_discovered": 1, 00:19:51.498 "num_base_bdevs_operational": 1, 00:19:51.498 "base_bdevs_list": [ 00:19:51.498 { 00:19:51.498 "name": null, 00:19:51.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.498 "is_configured": false, 00:19:51.498 "data_offset": 0, 00:19:51.498 "data_size": 7936 00:19:51.498 }, 00:19:51.498 { 00:19:51.498 "name": "BaseBdev2", 00:19:51.498 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:51.498 "is_configured": true, 00:19:51.498 "data_offset": 256, 00:19:51.498 "data_size": 7936 00:19:51.498 } 00:19:51.498 ] 00:19:51.498 }' 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.498 09:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.756 09:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:51.756 09:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.756 09:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.756 [2024-10-11 09:53:36.301077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:51.756 [2024-10-11 09:53:36.301145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.756 [2024-10-11 09:53:36.301172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:51.756 [2024-10-11 09:53:36.301183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.756 [2024-10-11 09:53:36.301440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.756 [2024-10-11 09:53:36.301467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:51.756 [2024-10-11 09:53:36.301530] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:51.756 [2024-10-11 09:53:36.301545] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:51.756 [2024-10-11 09:53:36.301555] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:51.756 [2024-10-11 09:53:36.301586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.756 [2024-10-11 09:53:36.316715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:51.756 spare 00:19:51.756 09:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.756 09:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:51.756 [2024-10-11 09:53:36.318532] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.132 "name": 
"raid_bdev1", 00:19:53.132 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:53.132 "strip_size_kb": 0, 00:19:53.132 "state": "online", 00:19:53.132 "raid_level": "raid1", 00:19:53.132 "superblock": true, 00:19:53.132 "num_base_bdevs": 2, 00:19:53.132 "num_base_bdevs_discovered": 2, 00:19:53.132 "num_base_bdevs_operational": 2, 00:19:53.132 "process": { 00:19:53.132 "type": "rebuild", 00:19:53.132 "target": "spare", 00:19:53.132 "progress": { 00:19:53.132 "blocks": 2560, 00:19:53.132 "percent": 32 00:19:53.132 } 00:19:53.132 }, 00:19:53.132 "base_bdevs_list": [ 00:19:53.132 { 00:19:53.132 "name": "spare", 00:19:53.132 "uuid": "ca158b31-a6c1-54b5-809d-a5ba0d428cf2", 00:19:53.132 "is_configured": true, 00:19:53.132 "data_offset": 256, 00:19:53.132 "data_size": 7936 00:19:53.132 }, 00:19:53.132 { 00:19:53.132 "name": "BaseBdev2", 00:19:53.132 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:53.132 "is_configured": true, 00:19:53.132 "data_offset": 256, 00:19:53.132 "data_size": 7936 00:19:53.132 } 00:19:53.132 ] 00:19:53.132 }' 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.132 [2024-10-11 09:53:37.474587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:53.132 [2024-10-11 09:53:37.524123] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:53.132 [2024-10-11 09:53:37.524245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.132 [2024-10-11 09:53:37.524266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.132 [2024-10-11 09:53:37.524273] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.132 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.132 "name": "raid_bdev1", 00:19:53.132 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:53.132 "strip_size_kb": 0, 00:19:53.132 "state": "online", 00:19:53.132 "raid_level": "raid1", 00:19:53.132 "superblock": true, 00:19:53.132 "num_base_bdevs": 2, 00:19:53.133 "num_base_bdevs_discovered": 1, 00:19:53.133 "num_base_bdevs_operational": 1, 00:19:53.133 "base_bdevs_list": [ 00:19:53.133 { 00:19:53.133 "name": null, 00:19:53.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.133 "is_configured": false, 00:19:53.133 "data_offset": 0, 00:19:53.133 "data_size": 7936 00:19:53.133 }, 00:19:53.133 { 00:19:53.133 "name": "BaseBdev2", 00:19:53.133 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:53.133 "is_configured": true, 00:19:53.133 "data_offset": 256, 00:19:53.133 "data_size": 7936 00:19:53.133 } 00:19:53.133 ] 00:19:53.133 }' 00:19:53.133 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.133 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.393 09:53:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.393 09:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.393 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.393 "name": "raid_bdev1", 00:19:53.393 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:53.393 "strip_size_kb": 0, 00:19:53.393 "state": "online", 00:19:53.393 "raid_level": "raid1", 00:19:53.393 "superblock": true, 00:19:53.393 "num_base_bdevs": 2, 00:19:53.393 "num_base_bdevs_discovered": 1, 00:19:53.393 "num_base_bdevs_operational": 1, 00:19:53.393 "base_bdevs_list": [ 00:19:53.393 { 00:19:53.393 "name": null, 00:19:53.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.393 "is_configured": false, 00:19:53.393 "data_offset": 0, 00:19:53.393 "data_size": 7936 00:19:53.393 }, 00:19:53.393 { 00:19:53.393 "name": "BaseBdev2", 00:19:53.393 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:53.393 "is_configured": true, 00:19:53.393 "data_offset": 256, 00:19:53.393 "data_size": 7936 00:19:53.393 } 00:19:53.393 ] 00:19:53.393 }' 00:19:53.393 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.658 [2024-10-11 09:53:38.129046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:53.658 [2024-10-11 09:53:38.129185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.658 [2024-10-11 09:53:38.129219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:53.658 [2024-10-11 09:53:38.129228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.658 [2024-10-11 09:53:38.129463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.658 [2024-10-11 09:53:38.129475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:53.658 [2024-10-11 09:53:38.129533] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:53.658 [2024-10-11 09:53:38.129546] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:53.658 [2024-10-11 09:53:38.129557] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:53.658 [2024-10-11 09:53:38.129569] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:53.658 BaseBdev1 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.658 09:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.601 "name": "raid_bdev1", 00:19:54.601 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:54.601 "strip_size_kb": 0, 00:19:54.601 "state": "online", 00:19:54.601 "raid_level": "raid1", 00:19:54.601 "superblock": true, 00:19:54.601 "num_base_bdevs": 2, 00:19:54.601 "num_base_bdevs_discovered": 1, 00:19:54.601 "num_base_bdevs_operational": 1, 00:19:54.601 "base_bdevs_list": [ 00:19:54.601 { 00:19:54.601 "name": null, 00:19:54.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.601 "is_configured": false, 00:19:54.601 "data_offset": 0, 00:19:54.601 "data_size": 7936 00:19:54.601 }, 00:19:54.601 { 00:19:54.601 "name": "BaseBdev2", 00:19:54.601 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:54.601 "is_configured": true, 00:19:54.601 "data_offset": 256, 00:19:54.601 "data_size": 7936 00:19:54.601 } 00:19:54.601 ] 00:19:54.601 }' 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.601 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.170 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.170 "name": "raid_bdev1", 00:19:55.170 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:55.170 "strip_size_kb": 0, 00:19:55.171 "state": "online", 00:19:55.171 "raid_level": "raid1", 00:19:55.171 "superblock": true, 00:19:55.171 "num_base_bdevs": 2, 00:19:55.171 "num_base_bdevs_discovered": 1, 00:19:55.171 "num_base_bdevs_operational": 1, 00:19:55.171 "base_bdevs_list": [ 00:19:55.171 { 00:19:55.171 "name": null, 00:19:55.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.171 "is_configured": false, 00:19:55.171 "data_offset": 0, 00:19:55.171 "data_size": 7936 00:19:55.171 }, 00:19:55.171 { 00:19:55.171 "name": "BaseBdev2", 00:19:55.171 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:55.171 "is_configured": 
true, 00:19:55.171 "data_offset": 256, 00:19:55.171 "data_size": 7936 00:19:55.171 } 00:19:55.171 ] 00:19:55.171 }' 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.171 [2024-10-11 09:53:39.750414] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.171 [2024-10-11 09:53:39.750639] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:55.171 [2024-10-11 09:53:39.750661] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:55.171 request: 00:19:55.171 { 00:19:55.171 "base_bdev": "BaseBdev1", 00:19:55.171 "raid_bdev": "raid_bdev1", 00:19:55.171 "method": "bdev_raid_add_base_bdev", 00:19:55.171 "req_id": 1 00:19:55.171 } 00:19:55.171 Got JSON-RPC error response 00:19:55.171 response: 00:19:55.171 { 00:19:55.171 "code": -22, 00:19:55.171 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:55.171 } 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:55.171 09:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.550 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.550 "name": "raid_bdev1", 00:19:56.550 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:56.550 "strip_size_kb": 0, 00:19:56.550 "state": "online", 00:19:56.550 "raid_level": "raid1", 00:19:56.551 "superblock": true, 00:19:56.551 "num_base_bdevs": 2, 00:19:56.551 "num_base_bdevs_discovered": 1, 00:19:56.551 "num_base_bdevs_operational": 1, 00:19:56.551 "base_bdevs_list": [ 00:19:56.551 { 00:19:56.551 "name": null, 00:19:56.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.551 "is_configured": false, 00:19:56.551 
"data_offset": 0, 00:19:56.551 "data_size": 7936 00:19:56.551 }, 00:19:56.551 { 00:19:56.551 "name": "BaseBdev2", 00:19:56.551 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:56.551 "is_configured": true, 00:19:56.551 "data_offset": 256, 00:19:56.551 "data_size": 7936 00:19:56.551 } 00:19:56.551 ] 00:19:56.551 }' 00:19:56.551 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.551 09:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.810 "name": "raid_bdev1", 00:19:56.810 "uuid": "02f5c7db-5f73-46e8-9551-e1661cfda121", 00:19:56.810 
"strip_size_kb": 0, 00:19:56.810 "state": "online", 00:19:56.810 "raid_level": "raid1", 00:19:56.810 "superblock": true, 00:19:56.810 "num_base_bdevs": 2, 00:19:56.810 "num_base_bdevs_discovered": 1, 00:19:56.810 "num_base_bdevs_operational": 1, 00:19:56.810 "base_bdevs_list": [ 00:19:56.810 { 00:19:56.810 "name": null, 00:19:56.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.810 "is_configured": false, 00:19:56.810 "data_offset": 0, 00:19:56.810 "data_size": 7936 00:19:56.810 }, 00:19:56.810 { 00:19:56.810 "name": "BaseBdev2", 00:19:56.810 "uuid": "cc3e6108-050c-5220-a14d-c0038c4add4f", 00:19:56.810 "is_configured": true, 00:19:56.810 "data_offset": 256, 00:19:56.810 "data_size": 7936 00:19:56.810 } 00:19:56.810 ] 00:19:56.810 }' 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.810 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88339 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88339 ']' 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88339 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88339 00:19:56.811 killing process with 
pid 88339 00:19:56.811 Received shutdown signal, test time was about 60.000000 seconds 00:19:56.811 00:19:56.811 Latency(us) 00:19:56.811 [2024-10-11T09:53:41.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.811 [2024-10-11T09:53:41.443Z] =================================================================================================================== 00:19:56.811 [2024-10-11T09:53:41.443Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88339' 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88339 00:19:56.811 09:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88339 00:19:56.811 [2024-10-11 09:53:41.426122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:56.811 [2024-10-11 09:53:41.426250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.811 [2024-10-11 09:53:41.426318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.811 [2024-10-11 09:53:41.426329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:57.379 [2024-10-11 09:53:41.732071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.317 09:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:58.317 00:19:58.317 real 0m19.768s 00:19:58.317 user 0m25.581s 00:19:58.317 sys 0m2.743s 00:19:58.317 09:53:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:58.317 ************************************ 00:19:58.317 END TEST raid_rebuild_test_sb_md_separate 00:19:58.317 ************************************ 00:19:58.317 09:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.317 09:53:42 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:58.317 09:53:42 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:58.317 09:53:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:58.317 09:53:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.317 09:53:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.317 ************************************ 00:19:58.317 START TEST raid_state_function_test_sb_md_interleaved 00:19:58.317 ************************************ 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:58.317 09:53:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89024 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89024' 00:19:58.317 Process raid pid: 89024 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89024 00:19:58.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89024 ']' 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.317 09:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:58.576 [2024-10-11 09:53:42.947855] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:58.576 [2024-10-11 09:53:42.948142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.576 [2024-10-11 09:53:43.119911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.836 [2024-10-11 09:53:43.245504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.095 [2024-10-11 09:53:43.477068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.095 [2024-10-11 09:53:43.477113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.355 [2024-10-11 09:53:43.774367] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.355 [2024-10-11 09:53:43.774499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.355 [2024-10-11 09:53:43.774516] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.355 [2024-10-11 09:53:43.774528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.355 09:53:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.355 09:53:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.355 "name": "Existed_Raid", 00:19:59.355 "uuid": "546cfbde-ccd9-415c-94ae-8970936e0bff", 00:19:59.355 "strip_size_kb": 0, 00:19:59.355 "state": "configuring", 00:19:59.355 "raid_level": "raid1", 00:19:59.355 "superblock": true, 00:19:59.355 "num_base_bdevs": 2, 00:19:59.355 "num_base_bdevs_discovered": 0, 00:19:59.355 "num_base_bdevs_operational": 2, 00:19:59.355 "base_bdevs_list": [ 00:19:59.355 { 00:19:59.355 "name": "BaseBdev1", 00:19:59.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.355 "is_configured": false, 00:19:59.355 "data_offset": 0, 00:19:59.355 "data_size": 0 00:19:59.355 }, 00:19:59.355 { 00:19:59.355 "name": "BaseBdev2", 00:19:59.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.355 "is_configured": false, 00:19:59.355 "data_offset": 0, 00:19:59.355 "data_size": 0 00:19:59.355 } 00:19:59.355 ] 00:19:59.355 }' 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.355 09:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.615 [2024-10-11 09:53:44.229522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.615 [2024-10-11 09:53:44.229560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.615 [2024-10-11 09:53:44.237509] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.615 [2024-10-11 09:53:44.237553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.615 [2024-10-11 09:53:44.237563] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.615 [2024-10-11 09:53:44.237575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.615 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.875 [2024-10-11 09:53:44.285376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.875 BaseBdev1 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.875 [ 00:19:59.875 { 00:19:59.875 "name": "BaseBdev1", 00:19:59.875 "aliases": [ 00:19:59.875 "56794f8d-35b8-4943-98f8-f1c8bb57c39c" 00:19:59.875 ], 00:19:59.875 "product_name": "Malloc disk", 00:19:59.875 "block_size": 4128, 00:19:59.875 "num_blocks": 8192, 00:19:59.875 "uuid": "56794f8d-35b8-4943-98f8-f1c8bb57c39c", 00:19:59.875 "md_size": 32, 00:19:59.875 
"md_interleave": true, 00:19:59.875 "dif_type": 0, 00:19:59.875 "assigned_rate_limits": { 00:19:59.875 "rw_ios_per_sec": 0, 00:19:59.875 "rw_mbytes_per_sec": 0, 00:19:59.875 "r_mbytes_per_sec": 0, 00:19:59.875 "w_mbytes_per_sec": 0 00:19:59.875 }, 00:19:59.875 "claimed": true, 00:19:59.875 "claim_type": "exclusive_write", 00:19:59.875 "zoned": false, 00:19:59.875 "supported_io_types": { 00:19:59.875 "read": true, 00:19:59.875 "write": true, 00:19:59.875 "unmap": true, 00:19:59.875 "flush": true, 00:19:59.875 "reset": true, 00:19:59.875 "nvme_admin": false, 00:19:59.875 "nvme_io": false, 00:19:59.875 "nvme_io_md": false, 00:19:59.875 "write_zeroes": true, 00:19:59.875 "zcopy": true, 00:19:59.875 "get_zone_info": false, 00:19:59.875 "zone_management": false, 00:19:59.875 "zone_append": false, 00:19:59.875 "compare": false, 00:19:59.875 "compare_and_write": false, 00:19:59.875 "abort": true, 00:19:59.875 "seek_hole": false, 00:19:59.875 "seek_data": false, 00:19:59.875 "copy": true, 00:19:59.875 "nvme_iov_md": false 00:19:59.875 }, 00:19:59.875 "memory_domains": [ 00:19:59.875 { 00:19:59.875 "dma_device_id": "system", 00:19:59.875 "dma_device_type": 1 00:19:59.875 }, 00:19:59.875 { 00:19:59.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.875 "dma_device_type": 2 00:19:59.875 } 00:19:59.875 ], 00:19:59.875 "driver_specific": {} 00:19:59.875 } 00:19:59.875 ] 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.875 09:53:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.875 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.876 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.876 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.876 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.876 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.876 "name": "Existed_Raid", 00:19:59.876 "uuid": "73954a6b-10f0-44e7-b47b-48ffb965be23", 00:19:59.876 "strip_size_kb": 0, 00:19:59.876 "state": "configuring", 00:19:59.876 "raid_level": "raid1", 
00:19:59.876 "superblock": true, 00:19:59.876 "num_base_bdevs": 2, 00:19:59.876 "num_base_bdevs_discovered": 1, 00:19:59.876 "num_base_bdevs_operational": 2, 00:19:59.876 "base_bdevs_list": [ 00:19:59.876 { 00:19:59.876 "name": "BaseBdev1", 00:19:59.876 "uuid": "56794f8d-35b8-4943-98f8-f1c8bb57c39c", 00:19:59.876 "is_configured": true, 00:19:59.876 "data_offset": 256, 00:19:59.876 "data_size": 7936 00:19:59.876 }, 00:19:59.876 { 00:19:59.876 "name": "BaseBdev2", 00:19:59.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.876 "is_configured": false, 00:19:59.876 "data_offset": 0, 00:19:59.876 "data_size": 0 00:19:59.876 } 00:19:59.876 ] 00:19:59.876 }' 00:19:59.876 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.876 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.445 [2024-10-11 09:53:44.776692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.445 [2024-10-11 09:53:44.776834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.445 [2024-10-11 09:53:44.788726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.445 [2024-10-11 09:53:44.790658] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.445 [2024-10-11 09:53:44.790753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.445 
09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.445 "name": "Existed_Raid", 00:20:00.445 "uuid": "e22cbc19-cf2a-47c2-a680-3ae35f3db6f0", 00:20:00.445 "strip_size_kb": 0, 00:20:00.445 "state": "configuring", 00:20:00.445 "raid_level": "raid1", 00:20:00.445 "superblock": true, 00:20:00.445 "num_base_bdevs": 2, 00:20:00.445 "num_base_bdevs_discovered": 1, 00:20:00.445 "num_base_bdevs_operational": 2, 00:20:00.445 "base_bdevs_list": [ 00:20:00.445 { 00:20:00.445 "name": "BaseBdev1", 00:20:00.445 "uuid": "56794f8d-35b8-4943-98f8-f1c8bb57c39c", 00:20:00.445 "is_configured": true, 00:20:00.445 "data_offset": 256, 00:20:00.445 "data_size": 7936 00:20:00.445 }, 00:20:00.445 { 00:20:00.445 "name": "BaseBdev2", 00:20:00.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.445 "is_configured": false, 00:20:00.445 "data_offset": 0, 00:20:00.445 "data_size": 0 00:20:00.445 } 00:20:00.445 ] 00:20:00.445 }' 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:00.445 09:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.705 [2024-10-11 09:53:45.274493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.705 [2024-10-11 09:53:45.274850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:00.705 [2024-10-11 09:53:45.274903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:00.705 [2024-10-11 09:53:45.275013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:00.705 [2024-10-11 09:53:45.275117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:00.705 [2024-10-11 09:53:45.275155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:00.705 [2024-10-11 09:53:45.275260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.705 BaseBdev2 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.705 [ 00:20:00.705 { 00:20:00.705 "name": "BaseBdev2", 00:20:00.705 "aliases": [ 00:20:00.705 "6ea38a19-3e25-44f9-ac76-6696e057d7e2" 00:20:00.705 ], 00:20:00.705 "product_name": "Malloc disk", 00:20:00.705 "block_size": 4128, 00:20:00.705 "num_blocks": 8192, 00:20:00.705 "uuid": "6ea38a19-3e25-44f9-ac76-6696e057d7e2", 00:20:00.705 "md_size": 32, 00:20:00.705 "md_interleave": true, 00:20:00.705 "dif_type": 0, 00:20:00.705 "assigned_rate_limits": { 00:20:00.705 "rw_ios_per_sec": 0, 00:20:00.705 "rw_mbytes_per_sec": 0, 00:20:00.705 "r_mbytes_per_sec": 0, 00:20:00.705 "w_mbytes_per_sec": 0 00:20:00.705 }, 00:20:00.705 "claimed": true, 00:20:00.705 "claim_type": "exclusive_write", 
00:20:00.705 "zoned": false, 00:20:00.705 "supported_io_types": { 00:20:00.705 "read": true, 00:20:00.705 "write": true, 00:20:00.705 "unmap": true, 00:20:00.705 "flush": true, 00:20:00.705 "reset": true, 00:20:00.705 "nvme_admin": false, 00:20:00.705 "nvme_io": false, 00:20:00.705 "nvme_io_md": false, 00:20:00.705 "write_zeroes": true, 00:20:00.705 "zcopy": true, 00:20:00.705 "get_zone_info": false, 00:20:00.705 "zone_management": false, 00:20:00.705 "zone_append": false, 00:20:00.705 "compare": false, 00:20:00.705 "compare_and_write": false, 00:20:00.705 "abort": true, 00:20:00.705 "seek_hole": false, 00:20:00.705 "seek_data": false, 00:20:00.705 "copy": true, 00:20:00.705 "nvme_iov_md": false 00:20:00.705 }, 00:20:00.705 "memory_domains": [ 00:20:00.705 { 00:20:00.705 "dma_device_id": "system", 00:20:00.705 "dma_device_type": 1 00:20:00.705 }, 00:20:00.705 { 00:20:00.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.705 "dma_device_type": 2 00:20:00.705 } 00:20:00.705 ], 00:20:00.705 "driver_specific": {} 00:20:00.705 } 00:20:00.705 ] 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.705 
09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.705 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.706 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.965 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.965 "name": "Existed_Raid", 00:20:00.965 "uuid": "e22cbc19-cf2a-47c2-a680-3ae35f3db6f0", 00:20:00.965 "strip_size_kb": 0, 00:20:00.965 "state": "online", 00:20:00.965 "raid_level": "raid1", 00:20:00.965 "superblock": true, 00:20:00.965 "num_base_bdevs": 2, 00:20:00.965 "num_base_bdevs_discovered": 2, 00:20:00.965 
"num_base_bdevs_operational": 2, 00:20:00.965 "base_bdevs_list": [ 00:20:00.965 { 00:20:00.965 "name": "BaseBdev1", 00:20:00.965 "uuid": "56794f8d-35b8-4943-98f8-f1c8bb57c39c", 00:20:00.965 "is_configured": true, 00:20:00.965 "data_offset": 256, 00:20:00.965 "data_size": 7936 00:20:00.965 }, 00:20:00.965 { 00:20:00.965 "name": "BaseBdev2", 00:20:00.965 "uuid": "6ea38a19-3e25-44f9-ac76-6696e057d7e2", 00:20:00.965 "is_configured": true, 00:20:00.965 "data_offset": 256, 00:20:00.965 "data_size": 7936 00:20:00.965 } 00:20:00.965 ] 00:20:00.965 }' 00:20:00.965 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.965 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.225 09:53:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:01.225 [2024-10-11 09:53:45.774019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.225 "name": "Existed_Raid", 00:20:01.225 "aliases": [ 00:20:01.225 "e22cbc19-cf2a-47c2-a680-3ae35f3db6f0" 00:20:01.225 ], 00:20:01.225 "product_name": "Raid Volume", 00:20:01.225 "block_size": 4128, 00:20:01.225 "num_blocks": 7936, 00:20:01.225 "uuid": "e22cbc19-cf2a-47c2-a680-3ae35f3db6f0", 00:20:01.225 "md_size": 32, 00:20:01.225 "md_interleave": true, 00:20:01.225 "dif_type": 0, 00:20:01.225 "assigned_rate_limits": { 00:20:01.225 "rw_ios_per_sec": 0, 00:20:01.225 "rw_mbytes_per_sec": 0, 00:20:01.225 "r_mbytes_per_sec": 0, 00:20:01.225 "w_mbytes_per_sec": 0 00:20:01.225 }, 00:20:01.225 "claimed": false, 00:20:01.225 "zoned": false, 00:20:01.225 "supported_io_types": { 00:20:01.225 "read": true, 00:20:01.225 "write": true, 00:20:01.225 "unmap": false, 00:20:01.225 "flush": false, 00:20:01.225 "reset": true, 00:20:01.225 "nvme_admin": false, 00:20:01.225 "nvme_io": false, 00:20:01.225 "nvme_io_md": false, 00:20:01.225 "write_zeroes": true, 00:20:01.225 "zcopy": false, 00:20:01.225 "get_zone_info": false, 00:20:01.225 "zone_management": false, 00:20:01.225 "zone_append": false, 00:20:01.225 "compare": false, 00:20:01.225 "compare_and_write": false, 00:20:01.225 "abort": false, 00:20:01.225 "seek_hole": false, 00:20:01.225 "seek_data": false, 00:20:01.225 "copy": false, 00:20:01.225 "nvme_iov_md": false 00:20:01.225 }, 00:20:01.225 "memory_domains": [ 00:20:01.225 { 00:20:01.225 "dma_device_id": "system", 00:20:01.225 "dma_device_type": 1 00:20:01.225 }, 00:20:01.225 { 00:20:01.225 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:01.225 "dma_device_type": 2 00:20:01.225 }, 00:20:01.225 { 00:20:01.225 "dma_device_id": "system", 00:20:01.225 "dma_device_type": 1 00:20:01.225 }, 00:20:01.225 { 00:20:01.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.225 "dma_device_type": 2 00:20:01.225 } 00:20:01.225 ], 00:20:01.225 "driver_specific": { 00:20:01.225 "raid": { 00:20:01.225 "uuid": "e22cbc19-cf2a-47c2-a680-3ae35f3db6f0", 00:20:01.225 "strip_size_kb": 0, 00:20:01.225 "state": "online", 00:20:01.225 "raid_level": "raid1", 00:20:01.225 "superblock": true, 00:20:01.225 "num_base_bdevs": 2, 00:20:01.225 "num_base_bdevs_discovered": 2, 00:20:01.225 "num_base_bdevs_operational": 2, 00:20:01.225 "base_bdevs_list": [ 00:20:01.225 { 00:20:01.225 "name": "BaseBdev1", 00:20:01.225 "uuid": "56794f8d-35b8-4943-98f8-f1c8bb57c39c", 00:20:01.225 "is_configured": true, 00:20:01.225 "data_offset": 256, 00:20:01.225 "data_size": 7936 00:20:01.225 }, 00:20:01.225 { 00:20:01.225 "name": "BaseBdev2", 00:20:01.225 "uuid": "6ea38a19-3e25-44f9-ac76-6696e057d7e2", 00:20:01.225 "is_configured": true, 00:20:01.225 "data_offset": 256, 00:20:01.225 "data_size": 7936 00:20:01.225 } 00:20:01.225 ] 00:20:01.225 } 00:20:01.225 } 00:20:01.225 }' 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:01.225 BaseBdev2' 00:20:01.225 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:01.484 
09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.484 09:53:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.484 [2024-10-11 09:53:45.993438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.484 09:53:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.484 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.743 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.743 "name": "Existed_Raid", 00:20:01.743 "uuid": "e22cbc19-cf2a-47c2-a680-3ae35f3db6f0", 00:20:01.743 "strip_size_kb": 0, 00:20:01.743 "state": "online", 00:20:01.743 "raid_level": "raid1", 00:20:01.743 "superblock": true, 00:20:01.743 "num_base_bdevs": 2, 00:20:01.743 "num_base_bdevs_discovered": 1, 00:20:01.743 "num_base_bdevs_operational": 1, 00:20:01.743 "base_bdevs_list": [ 00:20:01.743 { 00:20:01.743 "name": null, 00:20:01.743 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:01.743 "is_configured": false, 00:20:01.743 "data_offset": 0, 00:20:01.743 "data_size": 7936 00:20:01.743 }, 00:20:01.743 { 00:20:01.743 "name": "BaseBdev2", 00:20:01.743 "uuid": "6ea38a19-3e25-44f9-ac76-6696e057d7e2", 00:20:01.743 "is_configured": true, 00:20:01.743 "data_offset": 256, 00:20:01.743 "data_size": 7936 00:20:01.743 } 00:20:01.743 ] 00:20:01.743 }' 00:20:01.743 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.743 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:02.003 09:53:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.003 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.262 [2024-10-11 09:53:46.634347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:02.262 [2024-10-11 09:53:46.634461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.262 [2024-10-11 09:53:46.725334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.262 [2024-10-11 09:53:46.725508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.262 [2024-10-11 09:53:46.725527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89024 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89024 ']' 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89024 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89024 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89024' 00:20:02.262 killing process with pid 89024 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89024 00:20:02.262 [2024-10-11 09:53:46.820932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:02.262 09:53:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89024 00:20:02.262 [2024-10-11 09:53:46.836353] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.641 
09:53:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:03.641 00:20:03.641 real 0m5.081s 00:20:03.641 user 0m7.315s 00:20:03.641 sys 0m0.907s 00:20:03.641 09:53:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:03.641 09:53:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.641 ************************************ 00:20:03.641 END TEST raid_state_function_test_sb_md_interleaved 00:20:03.641 ************************************ 00:20:03.641 09:53:47 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:03.641 09:53:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:03.641 09:53:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:03.641 09:53:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.641 ************************************ 00:20:03.641 START TEST raid_superblock_test_md_interleaved 00:20:03.641 ************************************ 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89272 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89272 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89272 ']' 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.641 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.642 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.642 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.642 09:53:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.642 [2024-10-11 09:53:48.083771] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:03.642 [2024-10-11 09:53:48.083966] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89272 ] 00:20:03.642 [2024-10-11 09:53:48.248168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.901 [2024-10-11 09:53:48.373870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.160 [2024-10-11 09:53:48.600762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.160 [2024-10-11 09:53:48.600911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.420 09:53:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.420 malloc1 00:20:04.420 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.420 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:04.420 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.420 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.420 [2024-10-11 09:53:49.045306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:04.420 [2024-10-11 09:53:49.045364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.420 [2024-10-11 09:53:49.045384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:04.420 [2024-10-11 09:53:49.045394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.420 
[2024-10-11 09:53:49.047388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.420 [2024-10-11 09:53:49.047430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:04.680 pt1 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.680 malloc2 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.680 [2024-10-11 09:53:49.106210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:04.680 [2024-10-11 09:53:49.106319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.680 [2024-10-11 09:53:49.106359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:04.680 [2024-10-11 09:53:49.106421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.680 [2024-10-11 09:53:49.108397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.680 [2024-10-11 09:53:49.108473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:04.680 pt2 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.680 [2024-10-11 09:53:49.118264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:04.680 [2024-10-11 09:53:49.120202] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:04.680 [2024-10-11 09:53:49.120476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:04.680 [2024-10-11 09:53:49.120532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:04.680 [2024-10-11 09:53:49.120640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:04.680 [2024-10-11 09:53:49.120768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:04.680 [2024-10-11 09:53:49.120818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:04.680 [2024-10-11 09:53:49.120939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.680 
09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.680 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.681 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.681 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.681 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.681 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.681 "name": "raid_bdev1", 00:20:04.681 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788", 00:20:04.681 "strip_size_kb": 0, 00:20:04.681 "state": "online", 00:20:04.681 "raid_level": "raid1", 00:20:04.681 "superblock": true, 00:20:04.681 "num_base_bdevs": 2, 00:20:04.681 "num_base_bdevs_discovered": 2, 00:20:04.681 "num_base_bdevs_operational": 2, 00:20:04.681 "base_bdevs_list": [ 00:20:04.681 { 00:20:04.681 "name": "pt1", 00:20:04.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.681 "is_configured": true, 00:20:04.681 "data_offset": 256, 00:20:04.681 "data_size": 7936 00:20:04.681 }, 00:20:04.681 { 00:20:04.681 "name": "pt2", 00:20:04.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.681 "is_configured": true, 00:20:04.681 "data_offset": 256, 00:20:04.681 "data_size": 7936 00:20:04.681 } 00:20:04.681 ] 00:20:04.681 }' 00:20:04.681 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.681 09:53:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.249 [2024-10-11 09:53:49.597729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:05.249 "name": "raid_bdev1", 00:20:05.249 "aliases": [ 00:20:05.249 "b972c3dd-72fc-4af2-ae58-4daaeec71788" 00:20:05.249 ], 00:20:05.249 "product_name": "Raid Volume", 00:20:05.249 "block_size": 4128, 00:20:05.249 "num_blocks": 7936, 00:20:05.249 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788", 00:20:05.249 "md_size": 32, 
00:20:05.249 "md_interleave": true,
00:20:05.249 "dif_type": 0,
00:20:05.249 "assigned_rate_limits": {
00:20:05.249 "rw_ios_per_sec": 0,
00:20:05.249 "rw_mbytes_per_sec": 0,
00:20:05.249 "r_mbytes_per_sec": 0,
00:20:05.249 "w_mbytes_per_sec": 0
00:20:05.249 },
00:20:05.249 "claimed": false,
00:20:05.249 "zoned": false,
00:20:05.249 "supported_io_types": {
00:20:05.249 "read": true,
00:20:05.249 "write": true,
00:20:05.249 "unmap": false,
00:20:05.249 "flush": false,
00:20:05.249 "reset": true,
00:20:05.249 "nvme_admin": false,
00:20:05.249 "nvme_io": false,
00:20:05.249 "nvme_io_md": false,
00:20:05.249 "write_zeroes": true,
00:20:05.249 "zcopy": false,
00:20:05.249 "get_zone_info": false,
00:20:05.249 "zone_management": false,
00:20:05.249 "zone_append": false,
00:20:05.249 "compare": false,
00:20:05.249 "compare_and_write": false,
00:20:05.249 "abort": false,
00:20:05.249 "seek_hole": false,
00:20:05.249 "seek_data": false,
00:20:05.249 "copy": false,
00:20:05.249 "nvme_iov_md": false
00:20:05.249 },
00:20:05.249 "memory_domains": [
00:20:05.249 {
00:20:05.249 "dma_device_id": "system",
00:20:05.249 "dma_device_type": 1
00:20:05.249 },
00:20:05.249 {
00:20:05.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:05.249 "dma_device_type": 2
00:20:05.249 },
00:20:05.249 {
00:20:05.249 "dma_device_id": "system",
00:20:05.249 "dma_device_type": 1
00:20:05.249 },
00:20:05.249 {
00:20:05.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:05.249 "dma_device_type": 2
00:20:05.249 }
00:20:05.249 ],
00:20:05.249 "driver_specific": {
00:20:05.249 "raid": {
00:20:05.249 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788",
00:20:05.249 "strip_size_kb": 0,
00:20:05.249 "state": "online",
00:20:05.249 "raid_level": "raid1",
00:20:05.249 "superblock": true,
00:20:05.249 "num_base_bdevs": 2,
00:20:05.249 "num_base_bdevs_discovered": 2,
00:20:05.249 "num_base_bdevs_operational": 2,
00:20:05.249 "base_bdevs_list": [
00:20:05.249 {
00:20:05.249 "name": "pt1",
00:20:05.249 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:05.249 "is_configured": true,
00:20:05.249 "data_offset": 256,
00:20:05.249 "data_size": 7936
00:20:05.249 },
00:20:05.249 {
00:20:05.249 "name": "pt2",
00:20:05.249 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:05.249 "is_configured": true,
00:20:05.249 "data_offset": 256,
00:20:05.249 "data_size": 7936
00:20:05.249 }
00:20:05.249 ]
00:20:05.249 }
00:20:05.249 }
00:20:05.249 }'
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:20:05.249 pt2'
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.249 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.250 [2024-10-11 09:53:49.817327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b972c3dd-72fc-4af2-ae58-4daaeec71788
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b972c3dd-72fc-4af2-ae58-4daaeec71788 ']'
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.250 [2024-10-11 09:53:49.864953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:05.250 [2024-10-11 09:53:49.864989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:05.250 [2024-10-11 09:53:49.865114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:05.250 [2024-10-11 09:53:49.865175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:05.250 [2024-10-11 09:53:49.865187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.250 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.509 09:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.509 [2024-10-11 09:53:50.036714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:20:05.509 [2024-10-11 09:53:50.038806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:20:05.509 [2024-10-11 09:53:50.038897] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:20:05.509 [2024-10-11 09:53:50.038959] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:20:05.509 [2024-10-11 09:53:50.038974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:05.509 [2024-10-11 09:53:50.038997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:20:05.509 request:
00:20:05.509 {
00:20:05.509 "name": "raid_bdev1",
00:20:05.509 "raid_level": "raid1",
00:20:05.509 "base_bdevs": [
00:20:05.509 "malloc1",
00:20:05.509 "malloc2"
00:20:05.509 ],
00:20:05.509 "superblock": false,
00:20:05.509 "method": "bdev_raid_create",
00:20:05.509 "req_id": 1
00:20:05.509 }
00:20:05.509 Got JSON-RPC error response
00:20:05.509 response:
00:20:05.509 {
00:20:05.509 "code": -17,
00:20:05.509 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:20:05.509 }
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.509 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.510 [2024-10-11 09:53:50.104556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:05.510 [2024-10-11 09:53:50.104704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:05.510 [2024-10-11 09:53:50.104755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:20:05.510 [2024-10-11 09:53:50.104796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:05.510 [2024-10-11 09:53:50.106906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:05.510 [2024-10-11 09:53:50.106990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:05.510 [2024-10-11 09:53:50.107079] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:20:05.510 [2024-10-11 09:53:50.107197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:05.510 pt1
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:05.510 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.769 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:05.769 "name": "raid_bdev1",
00:20:05.769 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788",
00:20:05.769 "strip_size_kb": 0,
00:20:05.769 "state": "configuring",
00:20:05.769 "raid_level": "raid1",
00:20:05.769 "superblock": true,
00:20:05.769 "num_base_bdevs": 2,
00:20:05.769 "num_base_bdevs_discovered": 1,
00:20:05.769 "num_base_bdevs_operational": 2,
00:20:05.769 "base_bdevs_list": [
00:20:05.769 {
00:20:05.769 "name": "pt1",
00:20:05.769 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:05.769 "is_configured": true,
00:20:05.769 "data_offset": 256,
00:20:05.769 "data_size": 7936
00:20:05.769 },
00:20:05.769 {
00:20:05.769 "name": null,
00:20:05.769 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:05.769 "is_configured": false,
00:20:05.769 "data_offset": 256,
00:20:05.769 "data_size": 7936
00:20:05.769 }
00:20:05.769 ]
00:20:05.769 }'
00:20:05.769 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:05.769 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.045 [2024-10-11 09:53:50.591793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:06.045 [2024-10-11 09:53:50.591873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:06.045 [2024-10-11 09:53:50.591897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:20:06.045 [2024-10-11 09:53:50.591908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:06.045 [2024-10-11 09:53:50.592099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:06.045 [2024-10-11 09:53:50.592116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:06.045 [2024-10-11 09:53:50.592173] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:20:06.045 [2024-10-11 09:53:50.592195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:06.045 [2024-10-11 09:53:50.592283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:20:06.045 [2024-10-11 09:53:50.592293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:20:06.045 [2024-10-11 09:53:50.592373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:20:06.045 [2024-10-11 09:53:50.592437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:20:06.045 [2024-10-11 09:53:50.592444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:20:06.045 [2024-10-11 09:53:50.592511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:06.045 pt2
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:06.045 "name": "raid_bdev1",
00:20:06.045 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788",
00:20:06.045 "strip_size_kb": 0,
00:20:06.045 "state": "online",
00:20:06.045 "raid_level": "raid1",
00:20:06.045 "superblock": true,
00:20:06.045 "num_base_bdevs": 2,
00:20:06.045 "num_base_bdevs_discovered": 2,
00:20:06.045 "num_base_bdevs_operational": 2,
00:20:06.045 "base_bdevs_list": [
00:20:06.045 {
00:20:06.045 "name": "pt1",
00:20:06.045 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:06.045 "is_configured": true,
00:20:06.045 "data_offset": 256,
00:20:06.045 "data_size": 7936
00:20:06.045 },
00:20:06.045 {
00:20:06.045 "name": "pt2",
00:20:06.045 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:06.045 "is_configured": true,
00:20:06.045 "data_offset": 256,
00:20:06.045 "data_size": 7936
00:20:06.045 }
00:20:06.045 ]
00:20:06.045 }'
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:06.045 09:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:06.634 [2024-10-11 09:53:51.111196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:06.634 "name": "raid_bdev1",
00:20:06.634 "aliases": [
00:20:06.634 "b972c3dd-72fc-4af2-ae58-4daaeec71788"
00:20:06.634 ],
00:20:06.634 "product_name": "Raid Volume",
00:20:06.634 "block_size": 4128,
00:20:06.634 "num_blocks": 7936,
00:20:06.634 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788",
00:20:06.634 "md_size": 32,
00:20:06.634 "md_interleave": true,
00:20:06.634 "dif_type": 0,
00:20:06.634 "assigned_rate_limits": {
00:20:06.634 "rw_ios_per_sec": 0,
00:20:06.634 "rw_mbytes_per_sec": 0,
00:20:06.634 "r_mbytes_per_sec": 0,
00:20:06.634 "w_mbytes_per_sec": 0
00:20:06.634 },
00:20:06.634 "claimed": false,
00:20:06.634 "zoned": false,
00:20:06.634 "supported_io_types": {
00:20:06.634 "read": true,
00:20:06.634 "write": true,
00:20:06.634 "unmap": false,
00:20:06.634 "flush": false,
00:20:06.634 "reset": true,
00:20:06.634 "nvme_admin": false,
00:20:06.634 "nvme_io": false,
00:20:06.634 "nvme_io_md": false,
00:20:06.634 "write_zeroes": true,
00:20:06.634 "zcopy": false,
00:20:06.634 "get_zone_info": false,
00:20:06.634 "zone_management": false,
00:20:06.634 "zone_append": false,
00:20:06.634 "compare": false,
00:20:06.634 "compare_and_write": false,
00:20:06.634 "abort": false,
00:20:06.634 "seek_hole": false,
00:20:06.634 "seek_data": false,
00:20:06.634 "copy": false,
00:20:06.634 "nvme_iov_md": false
00:20:06.634 },
00:20:06.634 "memory_domains": [
00:20:06.634 {
00:20:06.634 "dma_device_id": "system",
00:20:06.634 "dma_device_type": 1
00:20:06.634 },
00:20:06.634 {
00:20:06.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:06.634 "dma_device_type": 2
00:20:06.634 },
00:20:06.634 {
00:20:06.634 "dma_device_id": "system",
00:20:06.634 "dma_device_type": 1
00:20:06.634 },
00:20:06.634 {
00:20:06.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:06.634 "dma_device_type": 2
00:20:06.634 }
00:20:06.634 ],
00:20:06.634 "driver_specific": {
00:20:06.634 "raid": {
00:20:06.634 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788",
00:20:06.634 "strip_size_kb": 0,
00:20:06.634 "state": "online",
00:20:06.634 "raid_level": "raid1",
00:20:06.634 "superblock": true,
00:20:06.634 "num_base_bdevs": 2,
00:20:06.634 "num_base_bdevs_discovered": 2,
00:20:06.634 "num_base_bdevs_operational": 2,
00:20:06.634 "base_bdevs_list": [
00:20:06.634 {
00:20:06.634 "name": "pt1",
00:20:06.634 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:06.634 "is_configured": true,
00:20:06.634 "data_offset": 256,
00:20:06.634 "data_size": 7936
00:20:06.634 },
00:20:06.634 {
00:20:06.634 "name": "pt2",
00:20:06.634 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:06.634 "is_configured": true,
00:20:06.634 "data_offset": 256,
00:20:06.634 "data_size": 7936
00:20:06.634 }
00:20:06.634 ]
00:20:06.634 }
00:20:06.634 }
00:20:06.634 }'
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:20:06.634 pt2'
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.634 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:20:06.893 [2024-10-11 09:53:51.362769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b972c3dd-72fc-4af2-ae58-4daaeec71788 '!=' b972c3dd-72fc-4af2-ae58-4daaeec71788 ']'
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.893 [2024-10-11 09:53:51.410438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:06.893 "name": "raid_bdev1",
00:20:06.893 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788",
00:20:06.893 "strip_size_kb": 0,
00:20:06.893 "state": "online",
00:20:06.893 "raid_level": "raid1",
00:20:06.893 "superblock": true,
00:20:06.893 "num_base_bdevs": 2,
00:20:06.893 "num_base_bdevs_discovered": 1,
00:20:06.893 "num_base_bdevs_operational": 1,
00:20:06.893 "base_bdevs_list": [
00:20:06.893 {
00:20:06.893 "name": null,
00:20:06.893 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:06.893 "is_configured": false,
00:20:06.893 "data_offset": 0,
00:20:06.893 "data_size": 7936
00:20:06.893 },
00:20:06.893 {
00:20:06.893 "name": "pt2",
00:20:06.893 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:06.893 "is_configured": true,
00:20:06.893 "data_offset": 256,
00:20:06.893 "data_size": 7936
00:20:06.893 }
00:20:06.893 ]
00:20:06.893 }'
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:06.893 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:20:07.463 [2024-10-11 09:53:51.857642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:07.463 [2024-10-11 09:53:51.857775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:07.463 [2024-10-11 09:53:51.857893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:07.463 [2024-10-11 09:53:51.857976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:07.463 [2024-10-11 09:53:51.858025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 [2024-10-11 09:53:51.929503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:07.463 [2024-10-11 09:53:51.929605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.463 [2024-10-11 09:53:51.929639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:07.463 [2024-10-11 09:53:51.929674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.463 [2024-10-11 09:53:51.931665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.463 [2024-10-11 09:53:51.931779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:07.463 [2024-10-11 09:53:51.931860] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:07.463 [2024-10-11 09:53:51.931952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:07.463 [2024-10-11 09:53:51.932046] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:07.463 [2024-10-11 09:53:51.932083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:20:07.463 [2024-10-11 09:53:51.932193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:07.463 [2024-10-11 09:53:51.932315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:07.463 [2024-10-11 09:53:51.932356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:07.463 [2024-10-11 09:53:51.932469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.463 pt2 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.463 09:53:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.463 "name": "raid_bdev1", 00:20:07.463 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788", 00:20:07.463 "strip_size_kb": 0, 00:20:07.463 "state": "online", 00:20:07.463 "raid_level": "raid1", 00:20:07.463 "superblock": true, 00:20:07.463 "num_base_bdevs": 2, 00:20:07.463 "num_base_bdevs_discovered": 1, 00:20:07.463 "num_base_bdevs_operational": 1, 00:20:07.463 "base_bdevs_list": [ 00:20:07.463 { 00:20:07.463 "name": null, 00:20:07.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.463 "is_configured": false, 00:20:07.463 "data_offset": 256, 00:20:07.463 "data_size": 7936 00:20:07.463 }, 00:20:07.463 { 00:20:07.463 "name": "pt2", 00:20:07.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.463 "is_configured": true, 00:20:07.463 "data_offset": 256, 00:20:07.463 "data_size": 7936 00:20:07.463 } 00:20:07.463 ] 00:20:07.463 }' 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.463 09:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:08.033 09:53:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 [2024-10-11 09:53:52.440626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.033 [2024-10-11 09:53:52.440663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:08.033 [2024-10-11 09:53:52.440761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.033 [2024-10-11 09:53:52.440815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.033 [2024-10-11 09:53:52.440824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.033 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.033 [2024-10-11 09:53:52.504542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:08.034 [2024-10-11 09:53:52.504678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.034 [2024-10-11 09:53:52.504706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:08.034 [2024-10-11 09:53:52.504716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.034 [2024-10-11 09:53:52.506727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.034 [2024-10-11 09:53:52.506800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:08.034 [2024-10-11 09:53:52.506869] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:08.034 [2024-10-11 09:53:52.506920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:08.034 [2024-10-11 09:53:52.507034] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:08.034 [2024-10-11 09:53:52.507044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.034 [2024-10-11 09:53:52.507066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:08.034 [2024-10-11 09:53:52.507154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.034 [2024-10-11 09:53:52.507257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:20:08.034 [2024-10-11 09:53:52.507267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:08.034 [2024-10-11 09:53:52.507338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:08.034 [2024-10-11 09:53:52.507419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:08.034 [2024-10-11 09:53:52.507432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:08.034 [2024-10-11 09:53:52.507514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.034 pt1 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.034 09:53:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.034 "name": "raid_bdev1", 00:20:08.034 "uuid": "b972c3dd-72fc-4af2-ae58-4daaeec71788", 00:20:08.034 "strip_size_kb": 0, 00:20:08.034 "state": "online", 00:20:08.034 "raid_level": "raid1", 00:20:08.034 "superblock": true, 00:20:08.034 "num_base_bdevs": 2, 00:20:08.034 "num_base_bdevs_discovered": 1, 00:20:08.034 "num_base_bdevs_operational": 1, 00:20:08.034 "base_bdevs_list": [ 00:20:08.034 { 00:20:08.034 "name": null, 00:20:08.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.034 "is_configured": false, 00:20:08.034 "data_offset": 256, 00:20:08.034 "data_size": 7936 00:20:08.034 }, 00:20:08.034 { 00:20:08.034 "name": "pt2", 00:20:08.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.034 "is_configured": true, 00:20:08.034 "data_offset": 256, 00:20:08.034 "data_size": 7936 00:20:08.034 } 00:20:08.034 ] 00:20:08.034 }' 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.034 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.604 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:08.604 09:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.604 [2024-10-11 09:53:53.059949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b972c3dd-72fc-4af2-ae58-4daaeec71788 '!=' b972c3dd-72fc-4af2-ae58-4daaeec71788 ']' 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89272 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89272 ']' 00:20:08.604 09:53:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89272 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89272 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:08.604 killing process with pid 89272 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89272' 00:20:08.604 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 89272 00:20:08.604 [2024-10-11 09:53:53.127732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:08.604 [2024-10-11 09:53:53.127849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.605 [2024-10-11 09:53:53.127903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.605 [2024-10-11 09:53:53.127919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:08.605 09:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 89272 00:20:08.864 [2024-10-11 09:53:53.329902] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.804 09:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:09.804 00:20:09.804 real 0m6.443s 00:20:09.804 user 0m9.788s 00:20:09.804 sys 0m1.242s 00:20:09.804 
09:53:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:09.804 ************************************ 00:20:09.804 END TEST raid_superblock_test_md_interleaved 00:20:09.804 ************************************ 00:20:09.804 09:53:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.063 09:53:54 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:10.063 09:53:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:10.063 09:53:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.063 09:53:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:10.063 ************************************ 00:20:10.063 START TEST raid_rebuild_test_sb_md_interleaved 00:20:10.063 ************************************ 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:10.063 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89605 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89605 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89605 ']' 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.064 09:53:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.064 [2024-10-11 09:53:54.607943] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:10.064 [2024-10-11 09:53:54.608204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:10.064 Zero copy mechanism will not be used. 
00:20:10.064 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89605 ] 00:20:10.323 [2024-10-11 09:53:54.770171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.323 [2024-10-11 09:53:54.892686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.582 [2024-10-11 09:53:55.106217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.582 [2024-10-11 09:53:55.106326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.841 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.841 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:20:10.842 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:10.842 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:10.842 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.842 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 BaseBdev1_malloc 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 [2024-10-11 09:53:55.506187] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:11.101 [2024-10-11 09:53:55.506275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.101 [2024-10-11 09:53:55.506312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:11.101 [2024-10-11 09:53:55.506331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.101 [2024-10-11 09:53:55.508672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.101 [2024-10-11 09:53:55.508730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:11.101 BaseBdev1 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 BaseBdev2_malloc 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 [2024-10-11 09:53:55.561399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:20:11.101 [2024-10-11 09:53:55.561477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.101 [2024-10-11 09:53:55.561498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:11.101 [2024-10-11 09:53:55.561509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.101 [2024-10-11 09:53:55.563301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.101 [2024-10-11 09:53:55.563341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:11.101 BaseBdev2 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 spare_malloc 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 spare_delay 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 [2024-10-11 09:53:55.642267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:11.101 [2024-10-11 09:53:55.642325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.101 [2024-10-11 09:53:55.642360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:11.101 [2024-10-11 09:53:55.642370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.101 [2024-10-11 09:53:55.644210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.101 [2024-10-11 09:53:55.644287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:11.101 spare 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 [2024-10-11 09:53:55.654298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.101 [2024-10-11 09:53:55.656126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.101 [2024-10-11 09:53:55.656362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:11.101 [2024-10-11 09:53:55.656415] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:11.101 [2024-10-11 09:53:55.656513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:11.101 [2024-10-11 09:53:55.656620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:11.101 [2024-10-11 09:53:55.656655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:11.101 [2024-10-11 09:53:55.656772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.101 09:53:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.101 "name": "raid_bdev1", 00:20:11.101 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:11.101 "strip_size_kb": 0, 00:20:11.101 "state": "online", 00:20:11.101 "raid_level": "raid1", 00:20:11.101 "superblock": true, 00:20:11.101 "num_base_bdevs": 2, 00:20:11.101 "num_base_bdevs_discovered": 2, 00:20:11.101 "num_base_bdevs_operational": 2, 00:20:11.101 "base_bdevs_list": [ 00:20:11.101 { 00:20:11.101 "name": "BaseBdev1", 00:20:11.101 "uuid": "f946f988-4210-5cfc-894e-e219490c8bdf", 00:20:11.101 "is_configured": true, 00:20:11.101 "data_offset": 256, 00:20:11.101 "data_size": 7936 00:20:11.101 }, 00:20:11.101 { 00:20:11.101 "name": "BaseBdev2", 00:20:11.101 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:11.101 "is_configured": true, 00:20:11.101 "data_offset": 256, 00:20:11.101 "data_size": 7936 00:20:11.101 } 00:20:11.101 ] 00:20:11.101 }' 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.101 09:53:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:11.674 09:53:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.674 [2024-10-11 09:53:56.149766] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.674 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.675 09:53:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.675 [2024-10-11 09:53:56.209319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.675 "name": "raid_bdev1", 00:20:11.675 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:11.675 "strip_size_kb": 0, 00:20:11.675 "state": "online", 00:20:11.675 "raid_level": "raid1", 00:20:11.675 "superblock": true, 00:20:11.675 "num_base_bdevs": 2, 00:20:11.675 "num_base_bdevs_discovered": 1, 00:20:11.675 "num_base_bdevs_operational": 1, 00:20:11.675 "base_bdevs_list": [ 00:20:11.675 { 00:20:11.675 "name": null, 00:20:11.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.675 "is_configured": false, 00:20:11.675 "data_offset": 0, 00:20:11.675 "data_size": 7936 00:20:11.675 }, 00:20:11.675 { 00:20:11.675 "name": "BaseBdev2", 00:20:11.675 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:11.675 "is_configured": true, 00:20:11.675 "data_offset": 256, 00:20:11.675 "data_size": 7936 00:20:11.675 } 00:20:11.675 ] 00:20:11.675 }' 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.675 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.244 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:12.244 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.244 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.244 [2024-10-11 09:53:56.640671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.244 [2024-10-11 09:53:56.658572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005ee0 00:20:12.244 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.244 09:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:12.244 [2024-10-11 09:53:56.660625] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.184 "name": "raid_bdev1", 00:20:13.184 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:13.184 "strip_size_kb": 0, 00:20:13.184 "state": "online", 00:20:13.184 "raid_level": "raid1", 00:20:13.184 
"superblock": true, 00:20:13.184 "num_base_bdevs": 2, 00:20:13.184 "num_base_bdevs_discovered": 2, 00:20:13.184 "num_base_bdevs_operational": 2, 00:20:13.184 "process": { 00:20:13.184 "type": "rebuild", 00:20:13.184 "target": "spare", 00:20:13.184 "progress": { 00:20:13.184 "blocks": 2560, 00:20:13.184 "percent": 32 00:20:13.184 } 00:20:13.184 }, 00:20:13.184 "base_bdevs_list": [ 00:20:13.184 { 00:20:13.184 "name": "spare", 00:20:13.184 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:13.184 "is_configured": true, 00:20:13.184 "data_offset": 256, 00:20:13.184 "data_size": 7936 00:20:13.184 }, 00:20:13.184 { 00:20:13.184 "name": "BaseBdev2", 00:20:13.184 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:13.184 "is_configured": true, 00:20:13.184 "data_offset": 256, 00:20:13.184 "data_size": 7936 00:20:13.184 } 00:20:13.184 ] 00:20:13.184 }' 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.184 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.444 [2024-10-11 09:53:57.824362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.444 [2024-10-11 09:53:57.866783] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:20:13.444 [2024-10-11 09:53:57.866885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.444 [2024-10-11 09:53:57.866904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.444 [2024-10-11 09:53:57.866913] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.444 "name": "raid_bdev1", 00:20:13.444 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:13.444 "strip_size_kb": 0, 00:20:13.444 "state": "online", 00:20:13.444 "raid_level": "raid1", 00:20:13.444 "superblock": true, 00:20:13.444 "num_base_bdevs": 2, 00:20:13.444 "num_base_bdevs_discovered": 1, 00:20:13.444 "num_base_bdevs_operational": 1, 00:20:13.444 "base_bdevs_list": [ 00:20:13.444 { 00:20:13.444 "name": null, 00:20:13.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.444 "is_configured": false, 00:20:13.444 "data_offset": 0, 00:20:13.444 "data_size": 7936 00:20:13.444 }, 00:20:13.444 { 00:20:13.444 "name": "BaseBdev2", 00:20:13.444 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:13.444 "is_configured": true, 00:20:13.444 "data_offset": 256, 00:20:13.444 "data_size": 7936 00:20:13.444 } 00:20:13.444 ] 00:20:13.444 }' 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.444 09:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:13.703 
09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.703 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.963 "name": "raid_bdev1", 00:20:13.963 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:13.963 "strip_size_kb": 0, 00:20:13.963 "state": "online", 00:20:13.963 "raid_level": "raid1", 00:20:13.963 "superblock": true, 00:20:13.963 "num_base_bdevs": 2, 00:20:13.963 "num_base_bdevs_discovered": 1, 00:20:13.963 "num_base_bdevs_operational": 1, 00:20:13.963 "base_bdevs_list": [ 00:20:13.963 { 00:20:13.963 "name": null, 00:20:13.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.963 "is_configured": false, 00:20:13.963 "data_offset": 0, 00:20:13.963 "data_size": 7936 00:20:13.963 }, 00:20:13.963 { 00:20:13.963 "name": "BaseBdev2", 00:20:13.963 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:13.963 "is_configured": true, 00:20:13.963 "data_offset": 256, 00:20:13.963 "data_size": 7936 00:20:13.963 } 00:20:13.963 ] 00:20:13.963 }' 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.963 09:53:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.963 [2024-10-11 09:53:58.468209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.963 [2024-10-11 09:53:58.485005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.963 [2024-10-11 09:53:58.486871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.963 09:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.902 
09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.902 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.162 "name": "raid_bdev1", 00:20:15.162 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:15.162 "strip_size_kb": 0, 00:20:15.162 "state": "online", 00:20:15.162 "raid_level": "raid1", 00:20:15.162 "superblock": true, 00:20:15.162 "num_base_bdevs": 2, 00:20:15.162 "num_base_bdevs_discovered": 2, 00:20:15.162 "num_base_bdevs_operational": 2, 00:20:15.162 "process": { 00:20:15.162 "type": "rebuild", 00:20:15.162 "target": "spare", 00:20:15.162 "progress": { 00:20:15.162 "blocks": 2560, 00:20:15.162 "percent": 32 00:20:15.162 } 00:20:15.162 }, 00:20:15.162 "base_bdevs_list": [ 00:20:15.162 { 00:20:15.162 "name": "spare", 00:20:15.162 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:15.162 "is_configured": true, 00:20:15.162 "data_offset": 256, 00:20:15.162 "data_size": 7936 00:20:15.162 }, 00:20:15.162 { 00:20:15.162 "name": "BaseBdev2", 00:20:15.162 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:15.162 "is_configured": true, 00:20:15.162 "data_offset": 256, 00:20:15.162 "data_size": 7936 00:20:15.162 } 00:20:15.162 ] 00:20:15.162 }' 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:15.162 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=755 00:20:15.162 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.163 "name": "raid_bdev1", 00:20:15.163 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:15.163 "strip_size_kb": 0, 00:20:15.163 "state": "online", 00:20:15.163 "raid_level": "raid1", 00:20:15.163 "superblock": true, 00:20:15.163 "num_base_bdevs": 2, 00:20:15.163 "num_base_bdevs_discovered": 2, 00:20:15.163 "num_base_bdevs_operational": 2, 00:20:15.163 "process": { 00:20:15.163 "type": "rebuild", 00:20:15.163 "target": "spare", 00:20:15.163 "progress": { 00:20:15.163 "blocks": 2816, 00:20:15.163 "percent": 35 00:20:15.163 } 00:20:15.163 }, 00:20:15.163 "base_bdevs_list": [ 00:20:15.163 { 00:20:15.163 "name": "spare", 00:20:15.163 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:15.163 "is_configured": true, 00:20:15.163 "data_offset": 256, 00:20:15.163 "data_size": 7936 00:20:15.163 }, 00:20:15.163 { 00:20:15.163 "name": "BaseBdev2", 00:20:15.163 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:15.163 "is_configured": true, 00:20:15.163 "data_offset": 256, 00:20:15.163 "data_size": 7936 00:20:15.163 } 00:20:15.163 ] 00:20:15.163 }' 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.163 09:53:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.163 09:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.543 "name": "raid_bdev1", 00:20:16.543 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:16.543 "strip_size_kb": 0, 00:20:16.543 "state": 
"online", 00:20:16.543 "raid_level": "raid1", 00:20:16.543 "superblock": true, 00:20:16.543 "num_base_bdevs": 2, 00:20:16.543 "num_base_bdevs_discovered": 2, 00:20:16.543 "num_base_bdevs_operational": 2, 00:20:16.543 "process": { 00:20:16.543 "type": "rebuild", 00:20:16.543 "target": "spare", 00:20:16.543 "progress": { 00:20:16.543 "blocks": 5632, 00:20:16.543 "percent": 70 00:20:16.543 } 00:20:16.543 }, 00:20:16.543 "base_bdevs_list": [ 00:20:16.543 { 00:20:16.543 "name": "spare", 00:20:16.543 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:16.543 "is_configured": true, 00:20:16.543 "data_offset": 256, 00:20:16.543 "data_size": 7936 00:20:16.543 }, 00:20:16.543 { 00:20:16.543 "name": "BaseBdev2", 00:20:16.543 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:16.543 "is_configured": true, 00:20:16.543 "data_offset": 256, 00:20:16.543 "data_size": 7936 00:20:16.543 } 00:20:16.543 ] 00:20:16.543 }' 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.543 09:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.113 [2024-10-11 09:54:01.602310] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:17.113 [2024-10-11 09:54:01.602491] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:17.113 [2024-10-11 09:54:01.602649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.371 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.371 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.371 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.371 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.371 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.371 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.371 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.372 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.372 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.372 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.372 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.372 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.372 "name": "raid_bdev1", 00:20:17.372 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:17.372 "strip_size_kb": 0, 00:20:17.372 "state": "online", 00:20:17.372 "raid_level": "raid1", 00:20:17.372 "superblock": true, 00:20:17.372 "num_base_bdevs": 2, 00:20:17.372 "num_base_bdevs_discovered": 2, 00:20:17.372 "num_base_bdevs_operational": 2, 00:20:17.372 "base_bdevs_list": [ 00:20:17.372 { 00:20:17.372 "name": "spare", 00:20:17.372 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:17.372 "is_configured": true, 00:20:17.372 "data_offset": 256, 
00:20:17.372 "data_size": 7936 00:20:17.372 }, 00:20:17.372 { 00:20:17.372 "name": "BaseBdev2", 00:20:17.372 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:17.372 "is_configured": true, 00:20:17.372 "data_offset": 256, 00:20:17.372 "data_size": 7936 00:20:17.372 } 00:20:17.372 ] 00:20:17.372 }' 00:20:17.372 09:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.632 09:54:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.632 "name": "raid_bdev1", 00:20:17.632 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:17.632 "strip_size_kb": 0, 00:20:17.632 "state": "online", 00:20:17.632 "raid_level": "raid1", 00:20:17.632 "superblock": true, 00:20:17.632 "num_base_bdevs": 2, 00:20:17.632 "num_base_bdevs_discovered": 2, 00:20:17.632 "num_base_bdevs_operational": 2, 00:20:17.632 "base_bdevs_list": [ 00:20:17.632 { 00:20:17.632 "name": "spare", 00:20:17.632 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:17.632 "is_configured": true, 00:20:17.632 "data_offset": 256, 00:20:17.632 "data_size": 7936 00:20:17.632 }, 00:20:17.632 { 00:20:17.632 "name": "BaseBdev2", 00:20:17.632 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:17.632 "is_configured": true, 00:20:17.632 "data_offset": 256, 00:20:17.632 "data_size": 7936 00:20:17.632 } 00:20:17.632 ] 00:20:17.632 }' 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.632 09:54:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.632 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.891 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.891 "name": "raid_bdev1", 00:20:17.891 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:17.891 "strip_size_kb": 0, 00:20:17.891 "state": "online", 00:20:17.891 "raid_level": "raid1", 00:20:17.891 "superblock": true, 00:20:17.891 "num_base_bdevs": 2, 00:20:17.891 "num_base_bdevs_discovered": 2, 
00:20:17.891 "num_base_bdevs_operational": 2, 00:20:17.891 "base_bdevs_list": [ 00:20:17.891 { 00:20:17.891 "name": "spare", 00:20:17.891 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:17.891 "is_configured": true, 00:20:17.891 "data_offset": 256, 00:20:17.891 "data_size": 7936 00:20:17.891 }, 00:20:17.891 { 00:20:17.891 "name": "BaseBdev2", 00:20:17.891 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:17.891 "is_configured": true, 00:20:17.891 "data_offset": 256, 00:20:17.891 "data_size": 7936 00:20:17.891 } 00:20:17.891 ] 00:20:17.891 }' 00:20:17.891 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.891 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.151 [2024-10-11 09:54:02.649448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:18.151 [2024-10-11 09:54:02.649591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.151 [2024-10-11 09:54:02.649694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.151 [2024-10-11 09:54:02.649779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.151 [2024-10-11 09:54:02.649796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.151 09:54:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.151 [2024-10-11 09:54:02.721295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:18.151 [2024-10-11 09:54:02.721372] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:20:18.151 [2024-10-11 09:54:02.721397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:18.151 [2024-10-11 09:54:02.721406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.151 [2024-10-11 09:54:02.723345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.151 [2024-10-11 09:54:02.723436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:18.151 [2024-10-11 09:54:02.723507] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:18.151 [2024-10-11 09:54:02.723561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.151 [2024-10-11 09:54:02.723671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.151 spare 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.151 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.413 [2024-10-11 09:54:02.823624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:18.413 [2024-10-11 09:54:02.823798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:18.413 [2024-10-11 09:54:02.823942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:18.413 [2024-10-11 09:54:02.824060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:18.413 [2024-10-11 09:54:02.824070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:18.413 [2024-10-11 09:54:02.824179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.413 09:54:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.413 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.413 "name": "raid_bdev1", 00:20:18.413 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:18.413 "strip_size_kb": 0, 00:20:18.413 "state": "online", 00:20:18.413 "raid_level": "raid1", 00:20:18.413 "superblock": true, 00:20:18.413 "num_base_bdevs": 2, 00:20:18.413 "num_base_bdevs_discovered": 2, 00:20:18.413 "num_base_bdevs_operational": 2, 00:20:18.413 "base_bdevs_list": [ 00:20:18.413 { 00:20:18.413 "name": "spare", 00:20:18.413 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:18.413 "is_configured": true, 00:20:18.413 "data_offset": 256, 00:20:18.413 "data_size": 7936 00:20:18.413 }, 00:20:18.413 { 00:20:18.413 "name": "BaseBdev2", 00:20:18.413 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:18.413 "is_configured": true, 00:20:18.413 "data_offset": 256, 00:20:18.413 "data_size": 7936 00:20:18.413 } 00:20:18.413 ] 00:20:18.413 }' 00:20:18.414 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.414 09:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.686 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.959 "name": "raid_bdev1", 00:20:18.959 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:18.959 "strip_size_kb": 0, 00:20:18.959 "state": "online", 00:20:18.959 "raid_level": "raid1", 00:20:18.959 "superblock": true, 00:20:18.959 "num_base_bdevs": 2, 00:20:18.959 "num_base_bdevs_discovered": 2, 00:20:18.959 "num_base_bdevs_operational": 2, 00:20:18.959 "base_bdevs_list": [ 00:20:18.959 { 00:20:18.959 "name": "spare", 00:20:18.959 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:18.959 "is_configured": true, 00:20:18.959 "data_offset": 256, 00:20:18.959 "data_size": 7936 00:20:18.959 }, 00:20:18.959 { 00:20:18.959 "name": "BaseBdev2", 00:20:18.959 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:18.959 "is_configured": true, 00:20:18.959 "data_offset": 256, 00:20:18.959 "data_size": 7936 00:20:18.959 } 00:20:18.959 ] 00:20:18.959 }' 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.959 [2024-10-11 09:54:03.456119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.959 "name": "raid_bdev1", 00:20:18.959 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:18.959 "strip_size_kb": 0, 00:20:18.959 "state": "online", 00:20:18.959 "raid_level": "raid1", 00:20:18.959 "superblock": true, 00:20:18.959 "num_base_bdevs": 2, 00:20:18.959 "num_base_bdevs_discovered": 1, 00:20:18.959 "num_base_bdevs_operational": 1, 00:20:18.959 "base_bdevs_list": [ 00:20:18.959 { 00:20:18.959 "name": null, 00:20:18.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.959 
"is_configured": false, 00:20:18.959 "data_offset": 0, 00:20:18.959 "data_size": 7936 00:20:18.959 }, 00:20:18.959 { 00:20:18.959 "name": "BaseBdev2", 00:20:18.959 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:18.959 "is_configured": true, 00:20:18.959 "data_offset": 256, 00:20:18.959 "data_size": 7936 00:20:18.959 } 00:20:18.959 ] 00:20:18.959 }' 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.959 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.219 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.219 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.219 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.219 [2024-10-11 09:54:03.847927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.219 [2024-10-11 09:54:03.848160] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.219 [2024-10-11 09:54:03.848179] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:19.219 [2024-10-11 09:54:03.848225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.477 [2024-10-11 09:54:03.865690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:19.477 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.477 09:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:19.477 [2024-10-11 09:54:03.867609] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:20.415 "name": "raid_bdev1", 00:20:20.415 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:20.415 "strip_size_kb": 0, 00:20:20.415 "state": "online", 00:20:20.415 "raid_level": "raid1", 00:20:20.415 "superblock": true, 00:20:20.415 "num_base_bdevs": 2, 00:20:20.415 "num_base_bdevs_discovered": 2, 00:20:20.415 "num_base_bdevs_operational": 2, 00:20:20.415 "process": { 00:20:20.415 "type": "rebuild", 00:20:20.415 "target": "spare", 00:20:20.415 "progress": { 00:20:20.415 "blocks": 2560, 00:20:20.415 "percent": 32 00:20:20.415 } 00:20:20.415 }, 00:20:20.415 "base_bdevs_list": [ 00:20:20.415 { 00:20:20.415 "name": "spare", 00:20:20.415 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:20.415 "is_configured": true, 00:20:20.415 "data_offset": 256, 00:20:20.415 "data_size": 7936 00:20:20.415 }, 00:20:20.415 { 00:20:20.415 "name": "BaseBdev2", 00:20:20.415 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:20.415 "is_configured": true, 00:20:20.415 "data_offset": 256, 00:20:20.415 "data_size": 7936 00:20:20.415 } 00:20:20.415 ] 00:20:20.415 }' 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.415 09:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.415 [2024-10-11 09:54:04.995926] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:20.675 [2024-10-11 09:54:05.073841] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:20.675 [2024-10-11 09:54:05.074025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.675 [2024-10-11 09:54:05.074062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:20.675 [2024-10-11 09:54:05.074087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.675 09:54:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.675 "name": "raid_bdev1", 00:20:20.675 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:20.675 "strip_size_kb": 0, 00:20:20.675 "state": "online", 00:20:20.675 "raid_level": "raid1", 00:20:20.675 "superblock": true, 00:20:20.675 "num_base_bdevs": 2, 00:20:20.675 "num_base_bdevs_discovered": 1, 00:20:20.675 "num_base_bdevs_operational": 1, 00:20:20.675 "base_bdevs_list": [ 00:20:20.675 { 00:20:20.675 "name": null, 00:20:20.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.675 "is_configured": false, 00:20:20.675 "data_offset": 0, 00:20:20.675 "data_size": 7936 00:20:20.675 }, 00:20:20.675 { 00:20:20.675 "name": "BaseBdev2", 00:20:20.675 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:20.675 "is_configured": true, 00:20:20.675 "data_offset": 256, 00:20:20.675 "data_size": 7936 00:20:20.675 } 00:20:20.675 ] 00:20:20.675 }' 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.675 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.244 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:21.244 09:54:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.244 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.244 [2024-10-11 09:54:05.610394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.244 [2024-10-11 09:54:05.610547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.244 [2024-10-11 09:54:05.610591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:21.244 [2024-10-11 09:54:05.610623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.244 [2024-10-11 09:54:05.610868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.244 [2024-10-11 09:54:05.610923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.244 [2024-10-11 09:54:05.611015] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:21.244 [2024-10-11 09:54:05.611033] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:21.244 [2024-10-11 09:54:05.611043] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:21.244 [2024-10-11 09:54:05.611072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.244 [2024-10-11 09:54:05.628205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:21.244 spare 00:20:21.244 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.244 [2024-10-11 09:54:05.630122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.244 09:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:22.181 "name": "raid_bdev1", 00:20:22.181 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:22.181 "strip_size_kb": 0, 00:20:22.181 "state": "online", 00:20:22.181 "raid_level": "raid1", 00:20:22.181 "superblock": true, 00:20:22.181 "num_base_bdevs": 2, 00:20:22.181 "num_base_bdevs_discovered": 2, 00:20:22.181 "num_base_bdevs_operational": 2, 00:20:22.181 "process": { 00:20:22.181 "type": "rebuild", 00:20:22.181 "target": "spare", 00:20:22.181 "progress": { 00:20:22.181 "blocks": 2560, 00:20:22.181 "percent": 32 00:20:22.181 } 00:20:22.181 }, 00:20:22.181 "base_bdevs_list": [ 00:20:22.181 { 00:20:22.181 "name": "spare", 00:20:22.181 "uuid": "8098a04e-5841-5139-9904-065ef888e953", 00:20:22.181 "is_configured": true, 00:20:22.181 "data_offset": 256, 00:20:22.181 "data_size": 7936 00:20:22.181 }, 00:20:22.181 { 00:20:22.181 "name": "BaseBdev2", 00:20:22.181 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:22.181 "is_configured": true, 00:20:22.181 "data_offset": 256, 00:20:22.181 "data_size": 7936 00:20:22.181 } 00:20:22.181 ] 00:20:22.181 }' 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.181 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.181 [2024-10-11 
09:54:06.754400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.441 [2024-10-11 09:54:06.836055] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:22.441 [2024-10-11 09:54:06.836125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.441 [2024-10-11 09:54:06.836160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.441 [2024-10-11 09:54:06.836167] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.441 09:54:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.441 "name": "raid_bdev1", 00:20:22.441 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:22.441 "strip_size_kb": 0, 00:20:22.441 "state": "online", 00:20:22.441 "raid_level": "raid1", 00:20:22.441 "superblock": true, 00:20:22.441 "num_base_bdevs": 2, 00:20:22.441 "num_base_bdevs_discovered": 1, 00:20:22.441 "num_base_bdevs_operational": 1, 00:20:22.441 "base_bdevs_list": [ 00:20:22.441 { 00:20:22.441 "name": null, 00:20:22.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.441 "is_configured": false, 00:20:22.441 "data_offset": 0, 00:20:22.441 "data_size": 7936 00:20:22.441 }, 00:20:22.441 { 00:20:22.441 "name": "BaseBdev2", 00:20:22.441 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:22.441 "is_configured": true, 00:20:22.441 "data_offset": 256, 00:20:22.441 "data_size": 7936 00:20:22.441 } 00:20:22.441 ] 00:20:22.441 }' 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.441 09:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.700 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:22.700 09:54:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.700 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:22.700 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:22.700 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.959 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.959 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.959 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.959 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.959 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.959 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.959 "name": "raid_bdev1", 00:20:22.959 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:22.959 "strip_size_kb": 0, 00:20:22.959 "state": "online", 00:20:22.959 "raid_level": "raid1", 00:20:22.959 "superblock": true, 00:20:22.959 "num_base_bdevs": 2, 00:20:22.959 "num_base_bdevs_discovered": 1, 00:20:22.959 "num_base_bdevs_operational": 1, 00:20:22.959 "base_bdevs_list": [ 00:20:22.959 { 00:20:22.959 "name": null, 00:20:22.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.960 "is_configured": false, 00:20:22.960 "data_offset": 0, 00:20:22.960 "data_size": 7936 00:20:22.960 }, 00:20:22.960 { 00:20:22.960 "name": "BaseBdev2", 00:20:22.960 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:22.960 "is_configured": true, 00:20:22.960 "data_offset": 256, 
00:20:22.960 "data_size": 7936 00:20:22.960 } 00:20:22.960 ] 00:20:22.960 }' 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.960 [2024-10-11 09:54:07.475944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:22.960 [2024-10-11 09:54:07.476020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.960 [2024-10-11 09:54:07.476045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:22.960 [2024-10-11 09:54:07.476055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.960 [2024-10-11 09:54:07.476226] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.960 [2024-10-11 09:54:07.476238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.960 [2024-10-11 09:54:07.476289] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:22.960 [2024-10-11 09:54:07.476302] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:22.960 [2024-10-11 09:54:07.476311] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:22.960 [2024-10-11 09:54:07.476323] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:22.960 BaseBdev1 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.960 09:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.898 09:54:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.898 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.157 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.157 "name": "raid_bdev1", 00:20:24.157 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:24.157 "strip_size_kb": 0, 00:20:24.157 "state": "online", 00:20:24.157 "raid_level": "raid1", 00:20:24.157 "superblock": true, 00:20:24.157 "num_base_bdevs": 2, 00:20:24.157 "num_base_bdevs_discovered": 1, 00:20:24.157 "num_base_bdevs_operational": 1, 00:20:24.157 "base_bdevs_list": [ 00:20:24.157 { 00:20:24.157 "name": null, 00:20:24.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.157 "is_configured": false, 00:20:24.157 "data_offset": 0, 00:20:24.157 "data_size": 7936 00:20:24.157 }, 00:20:24.157 { 00:20:24.157 "name": "BaseBdev2", 00:20:24.157 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:24.157 "is_configured": true, 00:20:24.157 "data_offset": 256, 00:20:24.157 "data_size": 7936 00:20:24.157 } 00:20:24.157 ] 00:20:24.157 }' 00:20:24.157 09:54:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.157 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.418 "name": "raid_bdev1", 00:20:24.418 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:24.418 "strip_size_kb": 0, 00:20:24.418 "state": "online", 00:20:24.418 "raid_level": "raid1", 00:20:24.418 "superblock": true, 00:20:24.418 "num_base_bdevs": 2, 00:20:24.418 "num_base_bdevs_discovered": 1, 00:20:24.418 "num_base_bdevs_operational": 1, 00:20:24.418 "base_bdevs_list": [ 00:20:24.418 { 00:20:24.418 "name": 
null, 00:20:24.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.418 "is_configured": false, 00:20:24.418 "data_offset": 0, 00:20:24.418 "data_size": 7936 00:20:24.418 }, 00:20:24.418 { 00:20:24.418 "name": "BaseBdev2", 00:20:24.418 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:24.418 "is_configured": true, 00:20:24.418 "data_offset": 256, 00:20:24.418 "data_size": 7936 00:20:24.418 } 00:20:24.418 ] 00:20:24.418 }' 00:20:24.418 09:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.418 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.418 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.418 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 [2024-10-11 09:54:09.057904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.678 [2024-10-11 09:54:09.058081] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:24.678 [2024-10-11 09:54:09.058099] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:24.678 request: 00:20:24.678 { 00:20:24.678 "base_bdev": "BaseBdev1", 00:20:24.678 "raid_bdev": "raid_bdev1", 00:20:24.678 "method": "bdev_raid_add_base_bdev", 00:20:24.678 "req_id": 1 00:20:24.678 } 00:20:24.678 Got JSON-RPC error response 00:20:24.678 response: 00:20:24.678 { 00:20:24.678 "code": -22, 00:20:24.678 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:24.678 } 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.678 09:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.618 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.618 "name": "raid_bdev1", 00:20:25.618 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:25.618 "strip_size_kb": 0, 
00:20:25.618 "state": "online", 00:20:25.618 "raid_level": "raid1", 00:20:25.618 "superblock": true, 00:20:25.618 "num_base_bdevs": 2, 00:20:25.618 "num_base_bdevs_discovered": 1, 00:20:25.618 "num_base_bdevs_operational": 1, 00:20:25.618 "base_bdevs_list": [ 00:20:25.618 { 00:20:25.618 "name": null, 00:20:25.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.618 "is_configured": false, 00:20:25.618 "data_offset": 0, 00:20:25.618 "data_size": 7936 00:20:25.618 }, 00:20:25.618 { 00:20:25.618 "name": "BaseBdev2", 00:20:25.618 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:25.618 "is_configured": true, 00:20:25.618 "data_offset": 256, 00:20:25.619 "data_size": 7936 00:20:25.619 } 00:20:25.619 ] 00:20:25.619 }' 00:20:25.619 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.619 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.188 
09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.188 "name": "raid_bdev1", 00:20:26.188 "uuid": "6896ecf4-305b-432f-aa34-25bffc60e3bc", 00:20:26.188 "strip_size_kb": 0, 00:20:26.188 "state": "online", 00:20:26.188 "raid_level": "raid1", 00:20:26.188 "superblock": true, 00:20:26.188 "num_base_bdevs": 2, 00:20:26.188 "num_base_bdevs_discovered": 1, 00:20:26.188 "num_base_bdevs_operational": 1, 00:20:26.188 "base_bdevs_list": [ 00:20:26.188 { 00:20:26.188 "name": null, 00:20:26.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.188 "is_configured": false, 00:20:26.188 "data_offset": 0, 00:20:26.188 "data_size": 7936 00:20:26.188 }, 00:20:26.188 { 00:20:26.188 "name": "BaseBdev2", 00:20:26.188 "uuid": "7747a676-1d57-5d7a-b9b7-2dd3b826b9ee", 00:20:26.188 "is_configured": true, 00:20:26.188 "data_offset": 256, 00:20:26.188 "data_size": 7936 00:20:26.188 } 00:20:26.188 ] 00:20:26.188 }' 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89605 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89605 ']' 00:20:26.188 09:54:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89605 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89605 00:20:26.188 killing process with pid 89605 00:20:26.188 Received shutdown signal, test time was about 60.000000 seconds 00:20:26.188 00:20:26.188 Latency(us) 00:20:26.188 [2024-10-11T09:54:10.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.188 [2024-10-11T09:54:10.820Z] =================================================================================================================== 00:20:26.188 [2024-10-11T09:54:10.820Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89605' 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89605 00:20:26.188 [2024-10-11 09:54:10.688993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.188 [2024-10-11 09:54:10.689124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.188 09:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89605 00:20:26.188 [2024-10-11 09:54:10.689178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:26.188 [2024-10-11 09:54:10.689193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:26.448 [2024-10-11 09:54:10.977032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.837 09:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:27.837 00:20:27.837 real 0m17.529s 00:20:27.837 user 0m22.886s 00:20:27.837 sys 0m1.788s 00:20:27.837 09:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.837 09:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 ************************************ 00:20:27.837 END TEST raid_rebuild_test_sb_md_interleaved 00:20:27.837 ************************************ 00:20:27.837 09:54:12 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:27.837 09:54:12 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:27.837 09:54:12 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89605 ']' 00:20:27.837 09:54:12 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89605 00:20:27.837 09:54:12 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:27.837 00:20:27.837 real 12m17.576s 00:20:27.837 user 16m37.717s 00:20:27.837 sys 1m56.578s 00:20:27.837 09:54:12 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.837 09:54:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 ************************************ 00:20:27.837 END TEST bdev_raid 00:20:27.837 ************************************ 00:20:27.837 09:54:12 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:27.837 09:54:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:27.837 09:54:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:27.837 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 
************************************ 00:20:27.837 START TEST spdkcli_raid 00:20:27.837 ************************************ 00:20:27.837 09:54:12 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:27.837 * Looking for test storage... 00:20:27.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:27.837 09:54:12 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:27.837 09:54:12 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:27.837 09:54:12 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:20:27.837 09:54:12 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.837 09:54:12 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:27.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.838 --rc genhtml_branch_coverage=1 00:20:27.838 --rc genhtml_function_coverage=1 00:20:27.838 --rc genhtml_legend=1 00:20:27.838 --rc geninfo_all_blocks=1 00:20:27.838 --rc geninfo_unexecuted_blocks=1 00:20:27.838 00:20:27.838 ' 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:27.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.838 --rc genhtml_branch_coverage=1 00:20:27.838 --rc genhtml_function_coverage=1 00:20:27.838 --rc genhtml_legend=1 00:20:27.838 --rc geninfo_all_blocks=1 00:20:27.838 --rc geninfo_unexecuted_blocks=1 00:20:27.838 00:20:27.838 ' 00:20:27.838 
09:54:12 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:27.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.838 --rc genhtml_branch_coverage=1 00:20:27.838 --rc genhtml_function_coverage=1 00:20:27.838 --rc genhtml_legend=1 00:20:27.838 --rc geninfo_all_blocks=1 00:20:27.838 --rc geninfo_unexecuted_blocks=1 00:20:27.838 00:20:27.838 ' 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:27.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.838 --rc genhtml_branch_coverage=1 00:20:27.838 --rc genhtml_function_coverage=1 00:20:27.838 --rc genhtml_legend=1 00:20:27.838 --rc geninfo_all_blocks=1 00:20:27.838 --rc geninfo_unexecuted_blocks=1 00:20:27.838 00:20:27.838 ' 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:27.838 09:54:12 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90280 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:27.838 09:54:12 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90280 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 90280 ']' 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.838 09:54:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.108 [2024-10-11 09:54:12.542923] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:28.108 [2024-10-11 09:54:12.543508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90280 ] 00:20:28.108 [2024-10-11 09:54:12.709005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:28.367 [2024-10-11 09:54:12.834989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.367 [2024-10-11 09:54:12.835041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.306 09:54:13 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.306 09:54:13 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:20:29.306 09:54:13 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:29.306 09:54:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:29.306 09:54:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.306 09:54:13 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:29.306 09:54:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.306 09:54:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.306 09:54:13 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:29.306 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:29.306 ' 00:20:30.686 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:30.686 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:30.944 09:54:15 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:30.944 09:54:15 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.944 09:54:15 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.944 09:54:15 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:30.944 09:54:15 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.944 09:54:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.944 09:54:15 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:30.944 ' 00:20:32.324 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:32.324 09:54:16 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:32.324 09:54:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.324 09:54:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.324 09:54:16 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:32.324 09:54:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.324 09:54:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.324 09:54:16 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:32.324 09:54:16 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:32.892 09:54:17 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:32.892 09:54:17 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:32.892 09:54:17 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:32.892 09:54:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.892 09:54:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.892 09:54:17 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:32.892 09:54:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.892 09:54:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.893 09:54:17 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:32.893 ' 00:20:33.830 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:33.830 09:54:18 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:33.830 09:54:18 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.830 09:54:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.089 09:54:18 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:34.089 09:54:18 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:34.089 09:54:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.089 09:54:18 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:34.089 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:34.089 ' 00:20:35.468 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:35.468 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:35.468 09:54:20 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:35.468 09:54:20 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.468 09:54:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.468 09:54:20 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90280 00:20:35.469 09:54:20 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90280 ']' 00:20:35.469 09:54:20 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90280 00:20:35.469 09:54:20 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:20:35.469 09:54:20 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:35.469 09:54:20 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90280 00:20:35.729 killing process with pid 90280 00:20:35.729 09:54:20 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:35.729 09:54:20 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:35.729 09:54:20 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90280' 00:20:35.729 09:54:20 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 90280 00:20:35.729 09:54:20 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 90280 00:20:38.317 09:54:22 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:38.317 09:54:22 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90280 ']' 00:20:38.317 09:54:22 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90280 00:20:38.317 09:54:22 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90280 ']' 00:20:38.317 09:54:22 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90280 00:20:38.317 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90280) - No such process 00:20:38.317 09:54:22 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 90280 is not found' 00:20:38.317 Process with pid 90280 is not found 00:20:38.317 09:54:22 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:38.317 09:54:22 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:38.317 09:54:22 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:38.317 09:54:22 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:38.317 00:20:38.317 real 0m10.253s 00:20:38.317 user 0m21.083s 00:20:38.317 sys 
0m1.179s 00:20:38.317 09:54:22 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:38.317 09:54:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.317 ************************************ 00:20:38.317 END TEST spdkcli_raid 00:20:38.317 ************************************ 00:20:38.317 09:54:22 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:38.317 09:54:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:38.317 09:54:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:38.317 09:54:22 -- common/autotest_common.sh@10 -- # set +x 00:20:38.317 ************************************ 00:20:38.317 START TEST blockdev_raid5f 00:20:38.317 ************************************ 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:38.317 * Looking for test storage... 00:20:38.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.317 09:54:22 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:38.317 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.317 --rc genhtml_branch_coverage=1 00:20:38.317 --rc genhtml_function_coverage=1 00:20:38.317 --rc genhtml_legend=1 00:20:38.317 --rc geninfo_all_blocks=1 00:20:38.317 --rc geninfo_unexecuted_blocks=1 00:20:38.317 00:20:38.317 ' 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:38.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.317 --rc genhtml_branch_coverage=1 00:20:38.317 --rc genhtml_function_coverage=1 00:20:38.317 --rc genhtml_legend=1 00:20:38.317 --rc geninfo_all_blocks=1 00:20:38.317 --rc geninfo_unexecuted_blocks=1 00:20:38.317 00:20:38.317 ' 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:38.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.317 --rc genhtml_branch_coverage=1 00:20:38.317 --rc genhtml_function_coverage=1 00:20:38.317 --rc genhtml_legend=1 00:20:38.317 --rc geninfo_all_blocks=1 00:20:38.317 --rc geninfo_unexecuted_blocks=1 00:20:38.317 00:20:38.317 ' 00:20:38.317 09:54:22 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:38.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.317 --rc genhtml_branch_coverage=1 00:20:38.317 --rc genhtml_function_coverage=1 00:20:38.317 --rc genhtml_legend=1 00:20:38.317 --rc geninfo_all_blocks=1 00:20:38.317 --rc geninfo_unexecuted_blocks=1 00:20:38.317 00:20:38.317 ' 00:20:38.317 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:38.317 09:54:22 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:38.317 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:38.317 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90562 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90562 00:20:38.318 09:54:22 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:38.318 09:54:22 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90562 ']' 00:20:38.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.318 09:54:22 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.318 09:54:22 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.318 09:54:22 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.318 09:54:22 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.318 09:54:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.318 [2024-10-11 09:54:22.877019] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:38.318 [2024-10-11 09:54:22.877141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90562 ] 00:20:38.577 [2024-10-11 09:54:23.038865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.577 [2024-10-11 09:54:23.164130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.514 09:54:24 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.514 09:54:24 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:20:39.514 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:39.514 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:39.514 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:39.514 09:54:24 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.514 09:54:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.514 Malloc0 00:20:39.772 Malloc1 00:20:39.772 Malloc2 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.772 09:54:24 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.772 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:39.773 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "651d36b7-0043-4a58-9564-87baf1ea4c49"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "651d36b7-0043-4a58-9564-87baf1ea4c49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "651d36b7-0043-4a58-9564-87baf1ea4c49",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "45ea7288-e592-4542-8fb8-395048c796f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "028faf1a-a3a2-4979-9863-49d7f9971068",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "cef1133d-2674-4487-99d7-e706054057b9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:39.773 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:40.032 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:40.032 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:40.032 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:40.032 09:54:24 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90562 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90562 ']' 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90562 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90562 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.032 killing process with pid 90562 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90562' 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90562 00:20:40.032 09:54:24 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90562 00:20:42.572 09:54:26 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:42.572 09:54:26 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:42.572 09:54:26 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:42.572 09:54:26 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.572 09:54:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.572 ************************************ 00:20:42.572 START TEST bdev_hello_world 00:20:42.572 ************************************ 00:20:42.572 09:54:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:42.572 [2024-10-11 09:54:27.089934] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:42.572 [2024-10-11 09:54:27.090065] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90628 ] 00:20:42.831 [2024-10-11 09:54:27.235282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.831 [2024-10-11 09:54:27.374958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.400 [2024-10-11 09:54:27.924776] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:43.400 [2024-10-11 09:54:27.924828] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:43.400 [2024-10-11 09:54:27.924845] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:43.400 [2024-10-11 09:54:27.925376] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:43.400 [2024-10-11 09:54:27.925540] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:43.400 [2024-10-11 09:54:27.925564] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:43.400 [2024-10-11 09:54:27.925621] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:43.400 00:20:43.400 [2024-10-11 09:54:27.925649] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:44.779 00:20:44.779 real 0m2.290s 00:20:44.779 user 0m1.923s 00:20:44.779 sys 0m0.245s 00:20:44.779 09:54:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:44.779 09:54:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:44.779 ************************************ 00:20:44.779 END TEST bdev_hello_world 00:20:44.779 ************************************ 00:20:44.779 09:54:29 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:44.779 09:54:29 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:44.779 09:54:29 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.779 09:54:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.779 ************************************ 00:20:44.779 START TEST bdev_bounds 00:20:44.779 ************************************ 00:20:44.779 09:54:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:20:44.779 09:54:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90676 00:20:44.779 09:54:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:44.779 09:54:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:44.779 Process bdevio pid: 90676 00:20:44.779 09:54:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90676' 00:20:44.779 09:54:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90676 00:20:44.780 09:54:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90676 ']' 00:20:44.780 09:54:29 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.780 09:54:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.780 09:54:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.780 09:54:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.780 09:54:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:45.041 [2024-10-11 09:54:29.449487] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:45.041 [2024-10-11 09:54:29.449609] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90676 ] 00:20:45.041 [2024-10-11 09:54:29.613648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:45.310 [2024-10-11 09:54:29.748596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.310 [2024-10-11 09:54:29.748723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.310 [2024-10-11 09:54:29.748839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.879 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.879 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:20:45.879 09:54:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:45.879 I/O targets: 00:20:45.879 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:45.879 00:20:45.879 
00:20:45.879 CUnit - A unit testing framework for C - Version 2.1-3 00:20:45.879 http://cunit.sourceforge.net/ 00:20:45.879 00:20:45.879 00:20:45.879 Suite: bdevio tests on: raid5f 00:20:45.879 Test: blockdev write read block ...passed 00:20:45.879 Test: blockdev write zeroes read block ...passed 00:20:45.879 Test: blockdev write zeroes read no split ...passed 00:20:46.139 Test: blockdev write zeroes read split ...passed 00:20:46.139 Test: blockdev write zeroes read split partial ...passed 00:20:46.139 Test: blockdev reset ...passed 00:20:46.139 Test: blockdev write read 8 blocks ...passed 00:20:46.139 Test: blockdev write read size > 128k ...passed 00:20:46.139 Test: blockdev write read invalid size ...passed 00:20:46.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:46.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:46.139 Test: blockdev write read max offset ...passed 00:20:46.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:46.139 Test: blockdev writev readv 8 blocks ...passed 00:20:46.139 Test: blockdev writev readv 30 x 1block ...passed 00:20:46.139 Test: blockdev writev readv block ...passed 00:20:46.139 Test: blockdev writev readv size > 128k ...passed 00:20:46.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:46.139 Test: blockdev comparev and writev ...passed 00:20:46.139 Test: blockdev nvme passthru rw ...passed 00:20:46.139 Test: blockdev nvme passthru vendor specific ...passed 00:20:46.139 Test: blockdev nvme admin passthru ...passed 00:20:46.139 Test: blockdev copy ...passed 00:20:46.139 00:20:46.139 Run Summary: Type Total Ran Passed Failed Inactive 00:20:46.139 suites 1 1 n/a 0 0 00:20:46.139 tests 23 23 23 0 0 00:20:46.139 asserts 130 130 130 0 n/a 00:20:46.139 00:20:46.139 Elapsed time = 0.653 seconds 00:20:46.139 0 00:20:46.139 09:54:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90676 00:20:46.139 
09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90676 ']' 00:20:46.139 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90676 00:20:46.139 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:20:46.139 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:46.139 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90676 00:20:46.399 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:46.399 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:46.399 killing process with pid 90676 00:20:46.399 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90676' 00:20:46.399 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90676 00:20:46.399 09:54:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90676 00:20:47.779 ************************************ 00:20:47.779 END TEST bdev_bounds 00:20:47.779 ************************************ 00:20:47.779 09:54:32 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:47.779 00:20:47.779 real 0m2.822s 00:20:47.779 user 0m7.032s 00:20:47.779 sys 0m0.379s 00:20:47.779 09:54:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:47.779 09:54:32 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:47.779 09:54:32 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:47.779 09:54:32 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:47.779 09:54:32 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:47.779 
09:54:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:47.779 ************************************ 00:20:47.779 START TEST bdev_nbd 00:20:47.779 ************************************ 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90735 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90735 /var/tmp/spdk-nbd.sock 00:20:47.779 09:54:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90735 ']' 00:20:47.780 09:54:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:47.780 09:54:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:47.780 09:54:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:47.780 09:54:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.780 09:54:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:47.780 [2024-10-11 09:54:32.356383] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:47.780 [2024-10-11 09:54:32.356511] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.039 [2024-10-11 09:54:32.521553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.039 [2024-10-11 09:54:32.649968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.979 1+0 records in 00:20:48.979 1+0 records out 00:20:48.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542783 s, 7.5 MB/s 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:48.979 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:49.238 { 00:20:49.238 "nbd_device": "/dev/nbd0", 00:20:49.238 "bdev_name": "raid5f" 00:20:49.238 } 00:20:49.238 ]' 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:49.238 { 00:20:49.238 "nbd_device": "/dev/nbd0", 00:20:49.238 "bdev_name": "raid5f" 00:20:49.238 } 00:20:49.238 ]' 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:49.238 09:54:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.498 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:49.757 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:50.017 /dev/nbd0 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:50.017 09:54:34 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.017 1+0 records in 00:20:50.017 1+0 records out 00:20:50.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454494 s, 9.0 MB/s 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.017 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:50.277 { 00:20:50.277 "nbd_device": "/dev/nbd0", 00:20:50.277 "bdev_name": "raid5f" 00:20:50.277 } 00:20:50.277 ]' 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:50.277 { 00:20:50.277 "nbd_device": "/dev/nbd0", 00:20:50.277 "bdev_name": "raid5f" 00:20:50.277 } 00:20:50.277 ]' 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:50.277 256+0 records in 00:20:50.277 256+0 records out 00:20:50.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132369 s, 79.2 MB/s 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:50.277 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:50.537 256+0 records in 00:20:50.537 256+0 records out 00:20:50.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343714 s, 30.5 MB/s 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.537 09:54:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:50.805 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:50.806 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:51.076 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:51.337 malloc_lvol_verify 00:20:51.337 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:51.337 a577011e-9706-4c83-a87b-2da4ad2365c6 00:20:51.337 09:54:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:51.596 360a70f7-d731-4733-8ab6-60be5e026860 00:20:51.596 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:51.855 /dev/nbd0 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:51.855 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:20:51.855 done 00:20:51.855 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:51.855 00:20:51.855 Allocating group tables: 0/1 done 00:20:51.855 Writing inode tables: 0/1 done 00:20:51.855 Creating journal (1024 blocks): done 00:20:51.855 Writing superblocks and filesystem accounting information: 0/1 done 00:20:51.855 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.855 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90735 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90735 ']' 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90735 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90735 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90735' 00:20:52.115 killing process with pid 90735 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90735 00:20:52.115 09:54:36 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90735 00:20:53.495 09:54:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:53.495 00:20:53.495 real 0m5.838s 00:20:53.495 user 0m7.957s 00:20:53.495 sys 0m1.385s 00:20:53.495 09:54:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:53.495 09:54:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:53.495 ************************************ 00:20:53.495 END TEST bdev_nbd 00:20:53.495 ************************************ 00:20:53.755 09:54:38 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:53.755 09:54:38 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:53.755 09:54:38 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:53.755 09:54:38 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:53.755 09:54:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:53.755 09:54:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:53.755 09:54:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:53.755 ************************************ 00:20:53.755 START TEST bdev_fio 00:20:53.755 ************************************ 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:53.755 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:53.755 ************************************ 00:20:53.755 START TEST bdev_fio_rw_verify 00:20:53.755 ************************************ 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:53.755 09:54:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.015 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:54.015 fio-3.35 00:20:54.015 Starting 1 thread 00:21:06.233 00:21:06.233 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90941: Fri Oct 11 09:54:49 2024 00:21:06.233 read: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(404MiB/10001msec) 00:21:06.233 slat (nsec): min=17984, max=83945, avg=23129.21, stdev=3480.35 00:21:06.233 clat (usec): min=10, max=390, avg=153.73, stdev=57.25 00:21:06.233 lat (usec): min=30, max=414, avg=176.86, stdev=58.13 00:21:06.233 clat percentiles (usec): 00:21:06.233 | 50.000th=[ 149], 99.000th=[ 277], 99.900th=[ 310], 99.990th=[ 351], 00:21:06.233 | 99.999th=[ 388] 00:21:06.233 write: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(418MiB/9869msec); 0 zone resets 00:21:06.234 slat (usec): min=8, max=2426, avg=19.61, stdev= 8.76 00:21:06.234 clat (usec): min=33, max=2787, avg=354.57, stdev=57.74 00:21:06.234 lat (usec): min=51, max=2810, avg=374.19, stdev=59.94 00:21:06.234 clat percentiles (usec): 00:21:06.234 | 50.000th=[ 351], 99.000th=[ 490], 99.900th=[ 570], 99.990th=[ 938], 00:21:06.234 | 99.999th=[ 1045] 00:21:06.234 bw ( KiB/s): min=39560, max=48392, per=98.62%, avg=42789.05, stdev=2343.11, samples=19 00:21:06.234 iops : min= 9890, max=12098, avg=10697.26, stdev=585.78, samples=19 00:21:06.234 lat (usec) : 20=0.01%, 50=0.01%, 
100=10.94%, 250=37.26%, 500=51.48% 00:21:06.234 lat (usec) : 750=0.30%, 1000=0.02% 00:21:06.234 lat (msec) : 2=0.01%, 4=0.01% 00:21:06.234 cpu : usr=98.99%, sys=0.34%, ctx=28, majf=0, minf=8689 00:21:06.234 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:06.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.234 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.234 issued rwts: total=103436,107052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.234 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:06.234 00:21:06.234 Run status group 0 (all jobs): 00:21:06.234 READ: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=404MiB (424MB), run=10001-10001msec 00:21:06.234 WRITE: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=418MiB (438MB), run=9869-9869msec 00:21:06.492 ----------------------------------------------------- 00:21:06.492 Suppressions used: 00:21:06.492 count bytes template 00:21:06.493 1 7 /usr/src/fio/parse.c 00:21:06.493 397 38112 /usr/src/fio/iolog.c 00:21:06.493 1 8 libtcmalloc_minimal.so 00:21:06.493 1 904 libcrypto.so 00:21:06.493 ----------------------------------------------------- 00:21:06.493 00:21:06.754 00:21:06.754 real 0m12.854s 00:21:06.754 user 0m13.039s 00:21:06.754 sys 0m0.638s 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:06.754 ************************************ 00:21:06.754 END TEST bdev_fio_rw_verify 00:21:06.754 ************************************ 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:06.754 09:54:51 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "651d36b7-0043-4a58-9564-87baf1ea4c49"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "651d36b7-0043-4a58-9564-87baf1ea4c49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "651d36b7-0043-4a58-9564-87baf1ea4c49",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "45ea7288-e592-4542-8fb8-395048c796f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "028faf1a-a3a2-4979-9863-49d7f9971068",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "cef1133d-2674-4487-99d7-e706054057b9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:06.754 /home/vagrant/spdk_repo/spdk 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:21:06.754 00:21:06.754 real 0m13.126s 00:21:06.754 user 0m13.156s 00:21:06.754 sys 0m0.773s 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:06.754 09:54:51 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:06.754 ************************************ 00:21:06.754 END TEST bdev_fio 00:21:06.754 ************************************ 00:21:06.754 09:54:51 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:06.754 09:54:51 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:06.754 09:54:51 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:21:06.754 09:54:51 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:06.754 09:54:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:06.754 ************************************ 00:21:06.754 START TEST bdev_verify 00:21:06.754 ************************************ 00:21:06.754 09:54:51 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:07.016 [2024-10-11 09:54:51.442346] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:21:07.016 [2024-10-11 09:54:51.442462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91105 ] 00:21:07.016 [2024-10-11 09:54:51.609102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:07.276 [2024-10-11 09:54:51.736162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.276 [2024-10-11 09:54:51.736219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.845 Running I/O for 5 seconds... 00:21:09.725 9262.00 IOPS, 36.18 MiB/s [2024-10-11T09:54:55.738Z] 9595.00 IOPS, 37.48 MiB/s [2024-10-11T09:54:56.675Z] 9603.67 IOPS, 37.51 MiB/s [2024-10-11T09:54:57.631Z] 9632.25 IOPS, 37.63 MiB/s [2024-10-11T09:54:57.631Z] 9580.00 IOPS, 37.42 MiB/s 00:21:12.999 Latency(us) 00:21:12.999 [2024-10-11T09:54:57.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.999 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:12.999 Verification LBA range: start 0x0 length 0x2000 00:21:12.999 raid5f : 5.02 4480.40 17.50 0.00 0.00 43040.98 296.92 35257.80 00:21:12.999 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:12.999 Verification LBA range: start 0x2000 length 0x2000 00:21:12.999 raid5f : 5.02 5092.33 19.89 0.00 0.00 37926.02 336.27 28160.45 00:21:12.999 [2024-10-11T09:54:57.631Z] =================================================================================================================== 00:21:12.999 [2024-10-11T09:54:57.631Z] Total : 9572.73 37.39 0.00 0.00 40321.11 296.92 35257.80 00:21:14.383 00:21:14.383 real 0m7.358s 00:21:14.383 user 0m13.571s 00:21:14.383 sys 0m0.282s 00:21:14.383 09:54:58 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:14.383 09:54:58 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:14.383 ************************************ 00:21:14.383 END TEST bdev_verify 00:21:14.383 ************************************ 00:21:14.384 09:54:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:14.384 09:54:58 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:21:14.384 09:54:58 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:14.384 09:54:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:14.384 ************************************ 00:21:14.384 START TEST bdev_verify_big_io 00:21:14.384 ************************************ 00:21:14.384 09:54:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:14.384 [2024-10-11 09:54:58.879784] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:21:14.384 [2024-10-11 09:54:58.879922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91198 ] 00:21:14.643 [2024-10-11 09:54:59.049908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:14.643 [2024-10-11 09:54:59.177652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.643 [2024-10-11 09:54:59.177711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.213 Running I/O for 5 seconds... 
00:21:17.534 506.00 IOPS, 31.62 MiB/s [2024-10-11T09:55:03.106Z] 634.00 IOPS, 39.62 MiB/s [2024-10-11T09:55:04.045Z] 676.67 IOPS, 42.29 MiB/s [2024-10-11T09:55:04.984Z] 698.00 IOPS, 43.62 MiB/s [2024-10-11T09:55:05.244Z] 722.80 IOPS, 45.17 MiB/s 00:21:20.612 Latency(us) 00:21:20.612 [2024-10-11T09:55:05.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.612 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:20.612 Verification LBA range: start 0x0 length 0x200 00:21:20.612 raid5f : 5.35 355.75 22.23 0.00 0.00 8789692.71 266.51 390125.22 00:21:20.612 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:20.612 Verification LBA range: start 0x200 length 0x200 00:21:20.612 raid5f : 5.30 371.19 23.20 0.00 0.00 8523615.90 203.91 362651.61 00:21:20.612 [2024-10-11T09:55:05.244Z] =================================================================================================================== 00:21:20.612 [2024-10-11T09:55:05.244Z] Total : 726.94 45.43 0.00 0.00 8654524.04 203.91 390125.22 00:21:21.991 00:21:21.991 real 0m7.710s 00:21:21.991 user 0m14.234s 00:21:21.991 sys 0m0.298s 00:21:21.991 09:55:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.991 09:55:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.991 ************************************ 00:21:21.991 END TEST bdev_verify_big_io 00:21:21.991 ************************************ 00:21:21.991 09:55:06 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:21.991 09:55:06 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:21:21.991 09:55:06 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.991 09:55:06 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:21.991 ************************************ 00:21:21.991 START TEST bdev_write_zeroes 00:21:21.991 ************************************ 00:21:21.991 09:55:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:22.251 [2024-10-11 09:55:06.664077] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:21:22.251 [2024-10-11 09:55:06.664246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91302 ] 00:21:22.251 [2024-10-11 09:55:06.835824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.511 [2024-10-11 09:55:06.954460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.080 Running I/O for 1 seconds... 
00:21:24.019 26895.00 IOPS, 105.06 MiB/s 00:21:24.019 Latency(us) 00:21:24.019 [2024-10-11T09:55:08.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.019 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:24.019 raid5f : 1.01 26856.89 104.91 0.00 0.00 4750.98 1545.39 6496.36 00:21:24.019 [2024-10-11T09:55:08.651Z] =================================================================================================================== 00:21:24.019 [2024-10-11T09:55:08.651Z] Total : 26856.89 104.91 0.00 0.00 4750.98 1545.39 6496.36 00:21:25.448 00:21:25.448 real 0m3.284s 00:21:25.448 user 0m2.874s 00:21:25.448 sys 0m0.281s 00:21:25.448 09:55:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:25.448 09:55:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:25.448 ************************************ 00:21:25.448 END TEST bdev_write_zeroes 00:21:25.448 ************************************ 00:21:25.448 09:55:09 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:25.448 09:55:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:21:25.448 09:55:09 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:25.448 09:55:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.448 ************************************ 00:21:25.448 START TEST bdev_json_nonenclosed 00:21:25.448 ************************************ 00:21:25.448 09:55:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:25.448 [2024-10-11 
09:55:10.017339] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:21:25.448 [2024-10-11 09:55:10.017600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91355 ] 00:21:25.707 [2024-10-11 09:55:10.189021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.707 [2024-10-11 09:55:10.308577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.707 [2024-10-11 09:55:10.308677] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:25.707 [2024-10-11 09:55:10.308698] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:25.707 [2024-10-11 09:55:10.308708] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:25.965 00:21:25.965 real 0m0.652s 00:21:25.965 user 0m0.415s 00:21:25.965 sys 0m0.131s 00:21:25.965 09:55:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:25.965 ************************************ 00:21:25.965 END TEST bdev_json_nonenclosed 00:21:25.965 ************************************ 00:21:25.965 09:55:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:26.225 09:55:10 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:26.225 09:55:10 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:21:26.225 09:55:10 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:26.225 09:55:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:26.225 
************************************ 00:21:26.225 START TEST bdev_json_nonarray 00:21:26.225 ************************************ 00:21:26.225 09:55:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:26.225 [2024-10-11 09:55:10.729471] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:21:26.225 [2024-10-11 09:55:10.729676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91385 ] 00:21:26.485 [2024-10-11 09:55:10.890218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.485 [2024-10-11 09:55:11.009334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.485 [2024-10-11 09:55:11.009525] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:26.485 [2024-10-11 09:55:11.009590] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:26.485 [2024-10-11 09:55:11.009612] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:26.745 00:21:26.745 real 0m0.623s 00:21:26.745 user 0m0.398s 00:21:26.745 sys 0m0.120s 00:21:26.745 ************************************ 00:21:26.745 END TEST bdev_json_nonarray 00:21:26.745 ************************************ 00:21:26.745 09:55:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:26.745 09:55:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:26.745 09:55:11 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:26.745 00:21:26.745 real 0m48.815s 00:21:26.745 user 1m6.098s 00:21:26.745 sys 0m5.030s 00:21:26.745 09:55:11 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:26.745 09:55:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:26.745 
************************************ 00:21:26.745 END TEST blockdev_raid5f 00:21:26.745 ************************************ 00:21:27.004 09:55:11 -- spdk/autotest.sh@194 -- # uname -s 00:21:27.004 09:55:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:27.004 09:55:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:27.004 09:55:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:27.004 09:55:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@256 -- # timing_exit lib 00:21:27.004 09:55:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.004 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:27.004 09:55:11 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:27.004 09:55:11 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:27.004 09:55:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:27.004 09:55:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:27.004 09:55:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:27.004 09:55:11 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:21:27.004 09:55:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:27.004 09:55:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.004 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:27.004 09:55:11 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:27.004 09:55:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:27.004 09:55:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:27.004 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:28.911 INFO: APP EXITING 00:21:28.911 INFO: killing all VMs 00:21:28.911 INFO: killing vhost app 00:21:28.911 INFO: EXIT DONE 00:21:29.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:29.481 Waiting for block devices as requested 00:21:29.740 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:29.740 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.677 Cleaning 00:21:30.677 Removing: /var/run/dpdk/spdk0/config 00:21:30.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:30.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:30.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:30.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:30.677 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:30.677 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:30.677 Removing: /dev/shm/spdk_tgt_trace.pid57132 00:21:30.677 Removing: /var/run/dpdk/spdk0 00:21:30.677 Removing: /var/run/dpdk/spdk_pid56886 00:21:30.677 Removing: /var/run/dpdk/spdk_pid57132 00:21:30.677 Removing: /var/run/dpdk/spdk_pid57371 00:21:30.677 Removing: /var/run/dpdk/spdk_pid57476 00:21:30.677 Removing: /var/run/dpdk/spdk_pid57532 00:21:30.677 Removing: /var/run/dpdk/spdk_pid57671 00:21:30.677 Removing: /var/run/dpdk/spdk_pid57689 
00:21:30.677 Removing: /var/run/dpdk/spdk_pid57905 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58035 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58153 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58290 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58416 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58450 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58492 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58568 00:21:30.677 Removing: /var/run/dpdk/spdk_pid58696 00:21:30.677 Removing: /var/run/dpdk/spdk_pid59167 00:21:30.677 Removing: /var/run/dpdk/spdk_pid59253 00:21:30.677 Removing: /var/run/dpdk/spdk_pid59340 00:21:30.677 Removing: /var/run/dpdk/spdk_pid59356 00:21:30.677 Removing: /var/run/dpdk/spdk_pid59532 00:21:30.677 Removing: /var/run/dpdk/spdk_pid59548 00:21:30.677 Removing: /var/run/dpdk/spdk_pid59718 00:21:30.937 Removing: /var/run/dpdk/spdk_pid59734 00:21:30.937 Removing: /var/run/dpdk/spdk_pid59809 00:21:30.937 Removing: /var/run/dpdk/spdk_pid59833 00:21:30.937 Removing: /var/run/dpdk/spdk_pid59902 00:21:30.937 Removing: /var/run/dpdk/spdk_pid59920 00:21:30.937 Removing: /var/run/dpdk/spdk_pid60122 00:21:30.937 Removing: /var/run/dpdk/spdk_pid60164 00:21:30.937 Removing: /var/run/dpdk/spdk_pid60253 00:21:30.937 Removing: /var/run/dpdk/spdk_pid61625 00:21:30.937 Removing: /var/run/dpdk/spdk_pid61831 00:21:30.937 Removing: /var/run/dpdk/spdk_pid61982 00:21:30.937 Removing: /var/run/dpdk/spdk_pid62631 00:21:30.937 Removing: /var/run/dpdk/spdk_pid62837 00:21:30.937 Removing: /var/run/dpdk/spdk_pid62983 00:21:30.937 Removing: /var/run/dpdk/spdk_pid63626 00:21:30.937 Removing: /var/run/dpdk/spdk_pid63956 00:21:30.937 Removing: /var/run/dpdk/spdk_pid64102 00:21:30.937 Removing: /var/run/dpdk/spdk_pid65498 00:21:30.937 Removing: /var/run/dpdk/spdk_pid65751 00:21:30.937 Removing: /var/run/dpdk/spdk_pid65902 00:21:30.937 Removing: /var/run/dpdk/spdk_pid67305 00:21:30.937 Removing: /var/run/dpdk/spdk_pid67558 00:21:30.937 Removing: /var/run/dpdk/spdk_pid67704 
00:21:30.937 Removing: /var/run/dpdk/spdk_pid69101 00:21:30.937 Removing: /var/run/dpdk/spdk_pid69552 00:21:30.937 Removing: /var/run/dpdk/spdk_pid69700 00:21:30.937 Removing: /var/run/dpdk/spdk_pid71207 00:21:30.937 Removing: /var/run/dpdk/spdk_pid71466 00:21:30.937 Removing: /var/run/dpdk/spdk_pid71617 00:21:30.937 Removing: /var/run/dpdk/spdk_pid73119 00:21:30.937 Removing: /var/run/dpdk/spdk_pid73389 00:21:30.937 Removing: /var/run/dpdk/spdk_pid73536 00:21:30.937 Removing: /var/run/dpdk/spdk_pid75037 00:21:30.937 Removing: /var/run/dpdk/spdk_pid75530 00:21:30.937 Removing: /var/run/dpdk/spdk_pid75685 00:21:30.937 Removing: /var/run/dpdk/spdk_pid75832 00:21:30.937 Removing: /var/run/dpdk/spdk_pid76286 00:21:30.937 Removing: /var/run/dpdk/spdk_pid77017 00:21:30.937 Removing: /var/run/dpdk/spdk_pid77413 00:21:30.937 Removing: /var/run/dpdk/spdk_pid78108 00:21:30.937 Removing: /var/run/dpdk/spdk_pid78558 00:21:30.937 Removing: /var/run/dpdk/spdk_pid79334 00:21:30.937 Removing: /var/run/dpdk/spdk_pid79743 00:21:30.937 Removing: /var/run/dpdk/spdk_pid81709 00:21:30.937 Removing: /var/run/dpdk/spdk_pid82147 00:21:30.937 Removing: /var/run/dpdk/spdk_pid82591 00:21:30.937 Removing: /var/run/dpdk/spdk_pid84692 00:21:30.937 Removing: /var/run/dpdk/spdk_pid85177 00:21:30.938 Removing: /var/run/dpdk/spdk_pid85698 00:21:30.938 Removing: /var/run/dpdk/spdk_pid86756 00:21:30.938 Removing: /var/run/dpdk/spdk_pid87079 00:21:30.938 Removing: /var/run/dpdk/spdk_pid88015 00:21:30.938 Removing: /var/run/dpdk/spdk_pid88339 00:21:30.938 Removing: /var/run/dpdk/spdk_pid89272 00:21:30.938 Removing: /var/run/dpdk/spdk_pid89605 00:21:30.938 Removing: /var/run/dpdk/spdk_pid90280 00:21:30.938 Removing: /var/run/dpdk/spdk_pid90562 00:21:30.938 Removing: /var/run/dpdk/spdk_pid90628 00:21:31.197 Removing: /var/run/dpdk/spdk_pid90676 00:21:31.197 Removing: /var/run/dpdk/spdk_pid90926 00:21:31.197 Removing: /var/run/dpdk/spdk_pid91105 00:21:31.197 Removing: /var/run/dpdk/spdk_pid91198 
00:21:31.197 Removing: /var/run/dpdk/spdk_pid91302 00:21:31.197 Removing: /var/run/dpdk/spdk_pid91355 00:21:31.197 Removing: /var/run/dpdk/spdk_pid91385 00:21:31.197 Clean 00:21:31.197 09:55:15 -- common/autotest_common.sh@1451 -- # return 0 00:21:31.197 09:55:15 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:31.197 09:55:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.197 09:55:15 -- common/autotest_common.sh@10 -- # set +x 00:21:31.197 09:55:15 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:31.197 09:55:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.197 09:55:15 -- common/autotest_common.sh@10 -- # set +x 00:21:31.197 09:55:15 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:31.197 09:55:15 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:31.197 09:55:15 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:31.197 09:55:15 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:31.197 09:55:15 -- spdk/autotest.sh@394 -- # hostname 00:21:31.197 09:55:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:31.457 geninfo: WARNING: invalid characters removed from testname! 
00:21:58.025 09:55:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:58.025 09:55:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.950 09:55:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:02.485 09:55:46 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.392 09:55:48 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:06.928 09:55:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:08.835 09:55:53 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:08.835 09:55:53 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:22:08.835 09:55:53 -- common/autotest_common.sh@1691 -- $ lcov --version 00:22:08.835 09:55:53 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:22:09.095 09:55:53 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:22:09.095 09:55:53 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:22:09.095 09:55:53 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:22:09.095 09:55:53 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:22:09.095 09:55:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:22:09.095 09:55:53 -- scripts/common.sh@336 -- $ read -ra ver1 00:22:09.095 09:55:53 -- scripts/common.sh@337 -- $ IFS=.-: 00:22:09.095 09:55:53 -- scripts/common.sh@337 -- $ read -ra ver2 00:22:09.095 09:55:53 -- scripts/common.sh@338 -- $ local 'op=<' 00:22:09.095 09:55:53 -- scripts/common.sh@340 -- $ ver1_l=2 00:22:09.095 09:55:53 -- scripts/common.sh@341 -- $ ver2_l=1 00:22:09.095 09:55:53 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:22:09.095 09:55:53 -- scripts/common.sh@344 -- $ case "$op" in 00:22:09.095 09:55:53 -- scripts/common.sh@345 -- $ : 1 00:22:09.095 09:55:53 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:22:09.095 09:55:53 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:09.095 09:55:53 -- scripts/common.sh@365 -- $ decimal 1 00:22:09.095 09:55:53 -- scripts/common.sh@353 -- $ local d=1 00:22:09.095 09:55:53 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:22:09.095 09:55:53 -- scripts/common.sh@355 -- $ echo 1 00:22:09.095 09:55:53 -- scripts/common.sh@365 -- $ ver1[v]=1 00:22:09.095 09:55:53 -- scripts/common.sh@366 -- $ decimal 2 00:22:09.095 09:55:53 -- scripts/common.sh@353 -- $ local d=2 00:22:09.095 09:55:53 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:22:09.095 09:55:53 -- scripts/common.sh@355 -- $ echo 2 00:22:09.095 09:55:53 -- scripts/common.sh@366 -- $ ver2[v]=2 00:22:09.095 09:55:53 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:22:09.095 09:55:53 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:22:09.095 09:55:53 -- scripts/common.sh@368 -- $ return 0 00:22:09.095 09:55:53 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.095 09:55:53 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:22:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.095 --rc genhtml_branch_coverage=1 00:22:09.095 --rc genhtml_function_coverage=1 00:22:09.095 --rc genhtml_legend=1 00:22:09.095 --rc geninfo_all_blocks=1 00:22:09.095 --rc geninfo_unexecuted_blocks=1 00:22:09.095 00:22:09.095 ' 00:22:09.095 09:55:53 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:22:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.095 --rc genhtml_branch_coverage=1 00:22:09.095 --rc genhtml_function_coverage=1 00:22:09.095 --rc genhtml_legend=1 00:22:09.095 --rc geninfo_all_blocks=1 00:22:09.095 --rc geninfo_unexecuted_blocks=1 00:22:09.095 00:22:09.095 ' 00:22:09.095 09:55:53 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:22:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.095 --rc genhtml_branch_coverage=1 00:22:09.095 --rc 
genhtml_function_coverage=1 00:22:09.095 --rc genhtml_legend=1 00:22:09.095 --rc geninfo_all_blocks=1 00:22:09.095 --rc geninfo_unexecuted_blocks=1 00:22:09.095 00:22:09.095 ' 00:22:09.095 09:55:53 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:22:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.095 --rc genhtml_branch_coverage=1 00:22:09.095 --rc genhtml_function_coverage=1 00:22:09.095 --rc genhtml_legend=1 00:22:09.095 --rc geninfo_all_blocks=1 00:22:09.095 --rc geninfo_unexecuted_blocks=1 00:22:09.095 00:22:09.095 ' 00:22:09.095 09:55:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:09.095 09:55:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:22:09.095 09:55:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:09.095 09:55:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.095 09:55:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.095 09:55:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.096 09:55:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.096 09:55:53 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.096 09:55:53 -- paths/export.sh@5 -- $ export PATH 00:22:09.096 09:55:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.096 09:55:53 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:09.096 09:55:53 -- common/autobuild_common.sh@486 -- $ date +%s 00:22:09.096 09:55:53 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728640553.XXXXXX 00:22:09.096 09:55:53 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728640553.naTjJT 00:22:09.096 09:55:53 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:22:09.096 09:55:53 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:22:09.096 09:55:53 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:09.096 09:55:53 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:09.096 09:55:53 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:09.096 09:55:53 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:22:09.096 09:55:53 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:22:09.096 09:55:53 -- common/autotest_common.sh@10 -- $ set +x 00:22:09.096 09:55:53 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:22:09.096 09:55:53 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:22:09.096 09:55:53 -- pm/common@17 -- $ local monitor 00:22:09.096 09:55:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:09.096 09:55:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:09.096 09:55:53 -- pm/common@25 -- $ sleep 1 00:22:09.096 09:55:53 -- pm/common@21 -- $ date +%s 00:22:09.096 09:55:53 -- pm/common@21 -- $ date +%s 00:22:09.096 09:55:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728640553 00:22:09.096 09:55:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728640553 00:22:09.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728640553_collect-vmstat.pm.log 00:22:09.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728640553_collect-cpu-load.pm.log 00:22:10.035 09:55:54 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:22:10.035 09:55:54 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:22:10.035 09:55:54 -- spdk/autopackage.sh@14 -- $ timing_finish 00:22:10.035 09:55:54 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:10.035 09:55:54 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:10.035 
09:55:54 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:10.035 09:55:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:10.035 09:55:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:10.035 09:55:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:10.035 09:55:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:10.035 09:55:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:10.035 09:55:54 -- pm/common@44 -- $ pid=92909 00:22:10.035 09:55:54 -- pm/common@50 -- $ kill -TERM 92909 00:22:10.035 09:55:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:10.035 09:55:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:10.035 09:55:54 -- pm/common@44 -- $ pid=92910 00:22:10.035 09:55:54 -- pm/common@50 -- $ kill -TERM 92910 00:22:10.035 + [[ -n 5429 ]] 00:22:10.035 + sudo kill 5429 00:22:10.045 [Pipeline] } 00:22:10.060 [Pipeline] // timeout 00:22:10.065 [Pipeline] } 00:22:10.078 [Pipeline] // stage 00:22:10.083 [Pipeline] } 00:22:10.097 [Pipeline] // catchError 00:22:10.106 [Pipeline] stage 00:22:10.108 [Pipeline] { (Stop VM) 00:22:10.119 [Pipeline] sh 00:22:10.400 + vagrant halt 00:22:12.934 ==> default: Halting domain... 00:22:21.079 [Pipeline] sh 00:22:21.362 + vagrant destroy -f 00:22:23.925 ==> default: Removing domain... 
00:22:23.938 [Pipeline] sh 00:22:24.220 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:24.229 [Pipeline] } 00:22:24.244 [Pipeline] // stage 00:22:24.250 [Pipeline] } 00:22:24.264 [Pipeline] // dir 00:22:24.269 [Pipeline] } 00:22:24.284 [Pipeline] // wrap 00:22:24.289 [Pipeline] } 00:22:24.302 [Pipeline] // catchError 00:22:24.311 [Pipeline] stage 00:22:24.313 [Pipeline] { (Epilogue) 00:22:24.325 [Pipeline] sh 00:22:24.609 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:29.899 [Pipeline] catchError 00:22:29.901 [Pipeline] { 00:22:29.913 [Pipeline] sh 00:22:30.197 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:30.197 Artifacts sizes are good 00:22:30.207 [Pipeline] } 00:22:30.220 [Pipeline] // catchError 00:22:30.229 [Pipeline] archiveArtifacts 00:22:30.235 Archiving artifacts 00:22:30.356 [Pipeline] cleanWs 00:22:30.372 [WS-CLEANUP] Deleting project workspace... 00:22:30.372 [WS-CLEANUP] Deferred wipeout is used... 00:22:30.397 [WS-CLEANUP] done 00:22:30.399 [Pipeline] } 00:22:30.416 [Pipeline] // stage 00:22:30.421 [Pipeline] } 00:22:30.435 [Pipeline] // node 00:22:30.441 [Pipeline] End of Pipeline 00:22:30.499 Finished: SUCCESS